00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 2362
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3623
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.129 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.130 The recommended git tool is: git
00:00:00.130 using credential 00000000-0000-0000-0000-000000000002
00:00:00.132 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.164 Fetching changes from the remote Git repository
00:00:00.165 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.194 Using shallow fetch with depth 1
00:00:00.194 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.194 > git --version # timeout=10
00:00:00.225 > git --version # 'git version 2.39.2'
00:00:00.225 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.241 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.241 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.762 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.772 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.782 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD)
00:00:06.782 > git config core.sparsecheckout # timeout=10
00:00:06.793 > git read-tree -mu HEAD # timeout=10
00:00:06.808 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5
00:00:06.825 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd"
00:00:06.826 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10
00:00:06.908 [Pipeline] Start of Pipeline
00:00:06.919 [Pipeline] library
00:00:06.920 Loading library shm_lib@master
00:00:06.920 Library shm_lib@master is cached. Copying from home.
00:00:06.933 [Pipeline] node
00:00:06.965 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:06.966 [Pipeline] {
00:00:06.974 [Pipeline] catchError
00:00:06.976 [Pipeline] {
00:00:06.985 [Pipeline] wrap
00:00:06.991 [Pipeline] {
00:00:06.999 [Pipeline] stage
00:00:07.000 [Pipeline] { (Prologue)
00:00:07.020 [Pipeline] echo
00:00:07.022 Node: VM-host-SM38
00:00:07.029 [Pipeline] cleanWs
00:00:07.042 [WS-CLEANUP] Deleting project workspace...
00:00:07.042 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.050 [WS-CLEANUP] done
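Editor's note: the pinned, shallow checkout at the top of the log reduces to the following sequence; a minimal sketch assembled from the exact commands shown above, with workspace paths and Jenkins timeouts elided.

# Sketch: shallow, single-branch fetch of the build-pool repo, pinned to one revision.
git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
git fetch --tags --force --progress --depth=1 -- \
    https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf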
00:00:07.290 [Pipeline] setCustomBuildProperty
00:00:07.444 [Pipeline] httpRequest
00:00:07.893 [Pipeline] echo
00:00:07.894 Sorcerer 10.211.164.101 is alive
00:00:07.901 [Pipeline] retry
00:00:07.902 [Pipeline] {
00:00:07.912 [Pipeline] httpRequest
00:00:07.916 HttpMethod: GET
00:00:07.917 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:07.917 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:07.918 Response Code: HTTP/1.1 200 OK
00:00:07.919 Success: Status code 200 is in the accepted range: 200,404
00:00:07.919 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:08.995 [Pipeline] }
00:00:09.012 [Pipeline] // retry
00:00:09.019 [Pipeline] sh
00:00:09.300 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz
00:00:09.314 [Pipeline] httpRequest
00:00:09.641 [Pipeline] echo
00:00:09.643 Sorcerer 10.211.164.101 is alive
00:00:09.652 [Pipeline] retry
00:00:09.654 [Pipeline] {
00:00:09.670 [Pipeline] httpRequest
00:00:09.675 HttpMethod: GET
00:00:09.675 URL: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:09.676 Sending request to url: http://10.211.164.101/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:00:09.697 Response Code: HTTP/1.1 200 OK
00:00:09.697 Success: Status code 200 is in the accepted range: 200,404
00:00:09.698 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:16.843 [Pipeline] }
00:01:16.860 [Pipeline] // retry
00:01:16.868 [Pipeline] sh
00:01:17.145 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz
00:01:19.694 [Pipeline] sh
00:01:19.978 + git -C spdk log --oneline -n5
00:01:19.978 c13c99a5e test: Various fixes for Fedora40
00:01:19.978 726a04d70 test/nvmf: adjust timeout for bigger nvmes
00:01:19.978 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11
00:01:19.978 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched
00:01:19.978 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges
00:01:19.998 [Pipeline] writeFile
00:01:20.057 [Pipeline] sh
00:01:20.343 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:20.356 [Pipeline] sh
00:01:20.641 + cat autorun-spdk.conf
00:01:20.641 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.641 SPDK_TEST_NVME=1
00:01:20.641 SPDK_TEST_FTL=1
00:01:20.641 SPDK_TEST_ISAL=1
00:01:20.641 SPDK_RUN_ASAN=1
00:01:20.641 SPDK_RUN_UBSAN=1
00:01:20.641 SPDK_TEST_XNVME=1
00:01:20.641 SPDK_TEST_NVME_FDP=1
00:01:20.641 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:20.650 RUN_NIGHTLY=1
00:01:20.652 [Pipeline] }
00:01:20.664 [Pipeline] // stage
00:01:20.679 [Pipeline] stage
00:01:20.681 [Pipeline] { (Run VM)
00:01:20.693 [Pipeline] sh
00:01:20.980 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:20.980 + echo 'Start stage prepare_nvme.sh'
00:01:20.980 Start stage prepare_nvme.sh
00:01:20.980 + [[ -n 8 ]]
00:01:20.980 + disk_prefix=ex8
00:01:20.980 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:20.980 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:20.980 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:20.980 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:20.980 ++ SPDK_TEST_NVME=1
00:01:20.980 ++ SPDK_TEST_FTL=1
00:01:20.980 ++ SPDK_TEST_ISAL=1
00:01:20.980 ++ SPDK_RUN_ASAN=1
00:01:20.980 ++ SPDK_RUN_UBSAN=1
00:01:20.980 ++ SPDK_TEST_XNVME=1
00:01:20.980 ++ SPDK_TEST_NVME_FDP=1
00:01:20.980 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:20.980 ++ RUN_NIGHTLY=1
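Editor's note: autorun-spdk.conf is a plain shell fragment, so consumers simply source it and branch on the flags, exactly as the (( SPDK_TEST_FTL == 1 )) checks just below do. A minimal sketch of the pattern:

# Sketch: load the run configuration and gate optional work on its flags.
source ./autorun-spdk.conf
if (( SPDK_TEST_FTL == 1 )); then
    echo 'will provision an FTL backing image'
fi
if (( SPDK_TEST_NVME_FDP == 1 )); then
    echo 'will provision an FDP backing image'
fi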
00:01:20.980 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:20.980 + nvme_files=()
00:01:20.980 + declare -A nvme_files
00:01:20.980 + backend_dir=/var/lib/libvirt/images/backends
00:01:20.980 + nvme_files['nvme.img']=5G
00:01:20.980 + nvme_files['nvme-cmb.img']=5G
00:01:20.980 + nvme_files['nvme-multi0.img']=4G
00:01:20.980 + nvme_files['nvme-multi1.img']=4G
00:01:20.980 + nvme_files['nvme-multi2.img']=4G
00:01:20.980 + nvme_files['nvme-openstack.img']=8G
00:01:20.980 + nvme_files['nvme-zns.img']=5G
00:01:20.980 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:20.980 + (( SPDK_TEST_FTL == 1 ))
00:01:20.980 + nvme_files["nvme-ftl.img"]=6G
00:01:20.980 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:20.980 + nvme_files["nvme-fdp.img"]=1G
00:01:20.980 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:20.980 + for nvme in "${!nvme_files[@]}"
00:01:20.980 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi2.img -s 4G
00:01:20.980 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:20.980 + for nvme in "${!nvme_files[@]}"
00:01:20.980 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-ftl.img -s 6G
00:01:20.980 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:20.980 + for nvme in "${!nvme_files[@]}"
00:01:20.980 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-cmb.img -s 5G
00:01:20.980 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:20.980 + for nvme in "${!nvme_files[@]}"
00:01:20.981 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-openstack.img -s 8G
00:01:20.981 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:20.981 + for nvme in "${!nvme_files[@]}"
00:01:20.981 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-zns.img -s 5G
00:01:20.981 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:20.981 + for nvme in "${!nvme_files[@]}"
00:01:20.981 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi1.img -s 4G
00:01:20.981 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:20.981 + for nvme in "${!nvme_files[@]}"
00:01:20.981 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-multi0.img -s 4G
00:01:21.242 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:21.242 + for nvme in "${!nvme_files[@]}"
00:01:21.242 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme-fdp.img -s 1G
00:01:21.242 Formatting '/var/lib/libvirt/images/backends/ex8-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:21.242 + for nvme in "${!nvme_files[@]}"
00:01:21.242 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex8-nvme.img -s 5G
00:01:21.242 Formatting '/var/lib/libvirt/images/backends/ex8-nvme.img', fmt=raw size=5368709120 preallocation=falloc
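Editor's note: the loop traced above is driven by a bash associative array mapping image name to size; a condensed sketch of the same pattern (only a few entries shown), using the -n/-s flags exactly as the log does:

# Sketch: one raw backing file per array entry.
declare -A nvme_files=(
    ['nvme.img']=5G
    ['nvme-ftl.img']=6G
    ['nvme-fdp.img']=1G
)
backend_dir=/var/lib/libvirt/images/backends
for nvme in "${!nvme_files[@]}"; do
    sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
        -n "$backend_dir/ex8-$nvme" -s "${nvme_files[$nvme]}"
done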
00:01:21.242 ++ sudo grep -rl ex8-nvme.img /etc/libvirt/qemu
00:01:21.242 + echo 'End stage prepare_nvme.sh'
00:01:21.242 End stage prepare_nvme.sh
00:01:21.256 [Pipeline] sh
00:01:21.541 + DISTRO=fedora39
00:01:21.541 + CPUS=10
00:01:21.541 + RAM=12288
00:01:21.541 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:21.541 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex8-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex8-nvme.img -b /var/lib/libvirt/images/backends/ex8-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex8-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:21.541
00:01:21.541 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:21.541 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:21.541 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:21.541 HELP=0
00:01:21.541 DRY_RUN=0
00:01:21.541 NVME_FILE=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,/var/lib/libvirt/images/backends/ex8-nvme.img,/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,
00:01:21.541 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:21.541 NVME_AUTO_CREATE=0
00:01:21.541 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex8-nvme-multi1.img:/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,,
00:01:21.541 NVME_CMB=,,,,
00:01:21.541 NVME_PMR=,,,,
00:01:21.541 NVME_ZNS=,,,,
00:01:21.541 NVME_MS=true,,,,
00:01:21.541 NVME_FDP=,,,on,
00:01:21.541 SPDK_VAGRANT_DISTRO=fedora39
00:01:21.541 SPDK_VAGRANT_VMCPU=10
00:01:21.541 SPDK_VAGRANT_VMRAM=12288
00:01:21.541 SPDK_VAGRANT_PROVIDER=libvirt
00:01:21.541 SPDK_VAGRANT_HTTP_PROXY=
00:01:21.541 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:21.541 SPDK_OPENSTACK_NETWORK=0
00:01:21.541 VAGRANT_PACKAGE_BOX=0
00:01:21.541 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:21.541 FORCE_DISTRO=true
00:01:21.541 VAGRANT_BOX_VERSION=
00:01:21.541 EXTRA_VAGRANTFILES=
00:01:21.541 NIC_MODEL=e1000
00:01:21.541
00:01:21.541 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:21.542 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:24.087 Bringing machine 'default' up with 'libvirt' provider...
00:01:24.087 ==> default: Creating image (snapshot of base box volume).
00:01:24.349 ==> default: Creating domain with the following settings...
00:01:24.349 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731168643_c4db2b63249734cfa8d9
00:01:24.349 ==> default: -- Domain type: kvm
00:01:24.349 ==> default: -- Cpus: 10
00:01:24.349 ==> default: -- Feature: acpi
00:01:24.349 ==> default: -- Feature: apic
00:01:24.349 ==> default: -- Feature: pae
00:01:24.349 ==> default: -- Memory: 12288M
00:01:24.349 ==> default: -- Memory Backing: hugepages:
00:01:24.349 ==> default: -- Management MAC:
00:01:24.349 ==> default: -- Loader:
00:01:24.349 ==> default: -- Nvram:
00:01:24.349 ==> default: -- Base box: spdk/fedora39
00:01:24.349 ==> default: -- Storage pool: default
00:01:24.349 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731168643_c4db2b63249734cfa8d9.img (20G)
00:01:24.349 ==> default: -- Volume Cache: default
00:01:24.349 ==> default: -- Kernel:
00:01:24.349 ==> default: -- Initrd:
00:01:24.349 ==> default: -- Graphics Type: vnc
00:01:24.349 ==> default: -- Graphics Port: -1
00:01:24.349 ==> default: -- Graphics IP: 127.0.0.1
00:01:24.349 ==> default: -- Graphics Password: Not defined
00:01:24.349 ==> default: -- Video Type: cirrus
00:01:24.349 ==> default: -- Video VRAM: 9216
00:01:24.349 ==> default: -- Sound Type:
00:01:24.349 ==> default: -- Keymap: en-us
00:01:24.349 ==> default: -- TPM Path:
00:01:24.349 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:24.349 ==> default: -- Command line args:
00:01:24.349 ==> default: -> value=-device,
00:01:24.349 ==> default: -> value=nvme,id=nvme-0,serial=12340,
00:01:24.349 ==> default: -> value=-drive,
00:01:24.349 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:24.349 ==> default: -> value=-device,
00:01:24.349 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:24.349 ==> default: -> value=-device,
00:01:24.349 ==> default: -> value=nvme,id=nvme-1,serial=12341,
00:01:24.349 ==> default: -> value=-drive,
00:01:24.349 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-1-drive0,
00:01:24.349 ==> default: -> value=-device,
00:01:24.349 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.349 ==> default: -> value=-device,
00:01:24.349 ==> default: -> value=nvme,id=nvme-2,serial=12342,
00:01:24.349 ==> default: -> value=-drive,
00:01:24.349 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:24.349 ==> default: -> value=-device,
00:01:24.349 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.349 ==> default: -> value=-drive,
00:01:24.349 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:24.349 ==> default: -> value=-device,
00:01:24.349 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.349 ==> default: -> value=-drive,
00:01:24.349 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:24.349 ==> default: -> value=-device,
00:01:24.349 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:24.349 ==> default: -> value=-device,
00:01:24.349 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:24.349 ==> default: -> value=-device,
00:01:24.349 ==> default: -> value=nvme,id=nvme-3,serial=12343,subsys=fdp-subsys3,
00:01:24.349 ==> default: -> value=-drive,
00:01:24.349 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:24.349 ==> default: -> value=-device,
00:01:24.349 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
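Editor's note: each backing file above becomes an emulated NVMe controller plus namespace. Stripped of the libvirt wrapping, the repeating -device/-drive triplet reduces to roughly this QEMU invocation; a sketch for orientation, not the exact domain libvirt generates (the full machine, memory, and network arguments are omitted).

# Sketch: one NVMe controller with one raw-file-backed namespace,
# mirroring the nvme-1 triplet in the domain args above.
/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
    -drive format=raw,file=/var/lib/libvirt/images/backends/ex8-nvme.img,if=none,id=nvme-1-drive0 \
    -device nvme,id=nvme-1,serial=12341 \
    -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,logical_block_size=4096,physical_block_size=4096

The FDP controller (nvme-3) differs only in attaching to an nvme-subsys device with fdp=on, which is what enables Flexible Data Placement for the nvme-fdp.img namespace.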
00:01:24.611 ==> default: Creating shared folders metadata...
00:01:24.611 ==> default: Starting domain.
00:01:26.529 ==> default: Waiting for domain to get an IP address...
00:01:48.537 ==> default: Waiting for SSH to become available...
00:01:48.537 ==> default: Configuring and enabling network interfaces...
00:01:51.856 default: SSH address: 192.168.121.73:22
00:01:51.856 default: SSH username: vagrant
00:01:51.856 default: SSH auth method: private key
00:01:53.772 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:01.915 ==> default: Mounting SSHFS shared folder...
00:02:03.303 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:03.303 ==> default: Checking Mount..
00:02:04.733 ==> default: Folder Successfully Mounted!
00:02:04.733
00:02:04.733 SUCCESS!
00:02:04.733
00:02:04.733 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:04.733 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:04.733 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:04.733
00:02:04.744 [Pipeline] }
00:02:04.758 [Pipeline] // stage
00:02:04.767 [Pipeline] dir
00:02:04.768 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:04.769 [Pipeline] {
00:02:04.782 [Pipeline] catchError
00:02:04.784 [Pipeline] {
00:02:04.796 [Pipeline] sh
00:02:05.080 + vagrant ssh-config --host vagrant
00:02:05.080 + sed -ne '/^Host/,$p'
00:02:05.080 + tee ssh_conf
00:02:07.623 Host vagrant
00:02:07.623 HostName 192.168.121.73
00:02:07.623 User vagrant
00:02:07.623 Port 22
00:02:07.623 UserKnownHostsFile /dev/null
00:02:07.623 StrictHostKeyChecking no
00:02:07.623 PasswordAuthentication no
00:02:07.623 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:07.623 IdentitiesOnly yes
00:02:07.623 LogLevel FATAL
00:02:07.623 ForwardAgent yes
00:02:07.623 ForwardX11 yes
00:02:07.623
00:02:07.637 [Pipeline] withEnv
00:02:07.639 [Pipeline] {
00:02:07.651 [Pipeline] sh
00:02:07.933 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:07.933 source /etc/os-release
00:02:07.933 [[ -e /image.version ]] && img=$(< /image.version)
00:02:07.933 # Minimal, systemd-like check.
00:02:07.933 if [[ -e /.dockerenv ]]; then
00:02:07.933 # Clear garbage from the node'\''s name:
00:02:07.933 # agt-er_autotest_547-896 -> autotest_547-896
00:02:07.933 # $HOSTNAME is the actual container id
00:02:07.933 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:07.933 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:07.933 # We can assume this is a mount from a host where container is running,
00:02:07.933 # so fetch its hostname to easily identify the target swarm worker.
00:02:07.933 container="$(< /etc/hostname) ($agent)"
00:02:07.933 else
00:02:07.933 # Fallback
00:02:07.933 container=$agent
00:02:07.933 fi
00:02:07.933 fi
00:02:07.933 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:07.933 '
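Editor's note: everything from here on reuses the ssh_conf captured above; the pattern is just ssh/scp with -F pointing at it. A usage sketch (the file name some-script.sh is hypothetical):

# Sketch: capture vagrant's SSH settings once, then reuse them freely.
vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf
ssh -t -F ssh_conf vagrant@vagrant 'uname -r'       # run a one-off command
scp -F ssh_conf some-script.sh vagrant@vagrant:./   # copy files the same way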
00:02:08.215 [Pipeline] }
00:02:08.231 [Pipeline] // withEnv
00:02:08.239 [Pipeline] setCustomBuildProperty
00:02:08.252 [Pipeline] stage
00:02:08.254 [Pipeline] { (Tests)
00:02:08.270 [Pipeline] sh
00:02:08.554 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:08.830 [Pipeline] sh
00:02:09.115 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:09.413 [Pipeline] timeout
00:02:09.414 Timeout set to expire in 50 min
00:02:09.424 [Pipeline] {
00:02:09.434 [Pipeline] sh
00:02:09.718 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:10.292 HEAD is now at c13c99a5e test: Various fixes for Fedora40
00:02:10.306 [Pipeline] sh
00:02:10.593 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:10.869 [Pipeline] sh
00:02:11.151 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:11.432 [Pipeline] sh
00:02:11.740 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
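Editor's note: before autoruner.sh's own trace begins below, the per-run refresh above condenses to this sequence, all driven through the same ssh_conf; a sketch of the pattern as shown.

# Sketch: reset the VM checkout, fix ownership, push the fresh conf, run.
ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
scp -F ssh_conf -r autorun-spdk.conf vagrant@vagrant:spdk_repo
ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'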
00:02:12.002 ++ readlink -f spdk_repo
00:02:12.002 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:12.002 + [[ -n /home/vagrant/spdk_repo ]]
00:02:12.002 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:12.002 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:12.002 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:12.002 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:12.002 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:12.002 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:12.002 + cd /home/vagrant/spdk_repo
00:02:12.002 + source /etc/os-release
00:02:12.002 ++ NAME='Fedora Linux'
00:02:12.002 ++ VERSION='39 (Cloud Edition)'
00:02:12.002 ++ ID=fedora
00:02:12.002 ++ VERSION_ID=39
00:02:12.002 ++ VERSION_CODENAME=
00:02:12.002 ++ PLATFORM_ID=platform:f39
00:02:12.002 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:12.002 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:12.002 ++ LOGO=fedora-logo-icon
00:02:12.002 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:12.002 ++ HOME_URL=https://fedoraproject.org/
00:02:12.002 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:12.002 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:12.002 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:12.002 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:12.002 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:12.002 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:12.002 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:12.002 ++ SUPPORT_END=2024-11-12
00:02:12.002 ++ VARIANT='Cloud Edition'
00:02:12.002 ++ VARIANT_ID=cloud
00:02:12.002 + uname -a
00:02:12.002 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:12.002 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:12.002 Hugepages
00:02:12.002 node hugesize free / total
00:02:12.002 node0 1048576kB 0 / 0
00:02:12.002 node0 2048kB 0 / 0
00:02:12.002
00:02:12.002 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:12.002 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:12.002 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:12.002 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:02:12.002 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:02:12.263 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:02:12.263 + rm -f /tmp/spdk-ld-path
00:02:12.263 + source autorun-spdk.conf
00:02:12.263 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:12.263 ++ SPDK_TEST_NVME=1
00:02:12.263 ++ SPDK_TEST_FTL=1
00:02:12.263 ++ SPDK_TEST_ISAL=1
00:02:12.263 ++ SPDK_RUN_ASAN=1
00:02:12.264 ++ SPDK_RUN_UBSAN=1
00:02:12.264 ++ SPDK_TEST_XNVME=1
00:02:12.264 ++ SPDK_TEST_NVME_FDP=1
00:02:12.264 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:12.264 ++ RUN_NIGHTLY=1
00:02:12.264 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:12.264 + [[ -n '' ]]
00:02:12.264 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:12.264 + for M in /var/spdk/build-*-manifest.txt
00:02:12.264 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:12.264 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:12.264 + for M in /var/spdk/build-*-manifest.txt
00:02:12.264 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:12.264 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:12.264 + for M in /var/spdk/build-*-manifest.txt
00:02:12.264 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:12.264 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:12.264 ++ uname
00:02:12.264 + [[ Linux == \L\i\n\u\x ]]
00:02:12.264 + sudo dmesg -T
00:02:12.264 + sudo dmesg --clear
00:02:12.264 + dmesg_pid=4988
00:02:12.264 + [[ Fedora Linux == FreeBSD ]]
00:02:12.264 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:12.264 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:12.264 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:12.264 + [[ -x /usr/src/fio-static/fio ]]
00:02:12.264 + sudo dmesg -Tw
00:02:12.264 + export FIO_BIN=/usr/src/fio-static/fio
00:02:12.264 + FIO_BIN=/usr/src/fio-static/fio
00:02:12.264 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:12.264 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:12.264 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:12.264 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:12.264 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:12.264 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:12.264 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:12.264 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:12.264 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:12.264 Test configuration:
00:02:12.264 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:12.264 SPDK_TEST_NVME=1
00:02:12.264 SPDK_TEST_FTL=1
00:02:12.264 SPDK_TEST_ISAL=1
00:02:12.264 SPDK_RUN_ASAN=1
00:02:12.264 SPDK_RUN_UBSAN=1
00:02:12.264 SPDK_TEST_XNVME=1
00:02:12.264 SPDK_TEST_NVME_FDP=1
00:02:12.264 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:12.264 RUN_NIGHTLY=1 16:11:31 -- common/autotest_common.sh@1689 -- $ [[ n == y ]]
00:02:12.264 16:11:31 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:12.264 16:11:31 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:12.264 16:11:31 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:12.264 16:11:31 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:12.264 16:11:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:12.264 16:11:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:12.264 16:11:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:12.264 16:11:31 -- paths/export.sh@5 -- $ export PATH
00:02:12.264 16:11:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
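Editor's note: the PATH assembled above accumulates duplicate entries because each paths/export.sh step prepends unconditionally. That is harmless, but if it bothers you, an order-preserving dedup sketch (not something the CI itself does):

# Sketch: keep the first occurrence of each PATH entry, drop repeats.
PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
PATH=${PATH%:}   # trim the trailing ':' that awk's ORS leaves behind
export PATH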
00:02:12.264 16:11:32 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:12.264 16:11:32 -- common/autobuild_common.sh@440 -- $ date +%s
00:02:12.264 16:11:32 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731168692.XXXXXX
00:02:12.264 16:11:32 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731168692.IkTTqp
00:02:12.264 16:11:32 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:02:12.264 16:11:32 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:02:12.264 16:11:32 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:12.264 16:11:32 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:12.264 16:11:32 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:12.264 16:11:32 -- common/autobuild_common.sh@456 -- $ get_config_params
00:02:12.264 16:11:32 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:02:12.264 16:11:32 -- common/autotest_common.sh@10 -- $ set +x
00:02:12.525 16:11:32 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:12.525 16:11:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:12.525 16:11:32 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:12.525 16:11:32 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:12.525 16:11:32 -- spdk/autobuild.sh@16 -- $ date -u
00:02:12.525 Sat Nov 9 04:11:32 PM UTC 2024
00:02:12.525 16:11:32 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:12.525 LTS-67-gc13c99a5e
00:02:12.525 16:11:32 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:12.525 16:11:32 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:12.525 16:11:32 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:12.525 16:11:32 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:12.525 16:11:32 -- common/autotest_common.sh@10 -- $ set +x
00:02:12.525 ************************************
00:02:12.525 START TEST asan
00:02:12.525 ************************************
00:02:12.525 using asan
00:02:12.525 16:11:32 -- common/autotest_common.sh@1114 -- $ echo 'using asan'
00:02:12.525
00:02:12.525 real 0m0.000s
00:02:12.525 user 0m0.000s
00:02:12.525 sys 0m0.000s
00:02:12.525 ************************************
00:02:12.525 END TEST asan
00:02:12.525 ************************************
00:02:12.525 16:11:32 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:02:12.525 16:11:32 -- common/autotest_common.sh@10 -- $ set +x
00:02:12.525 16:11:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:12.525 16:11:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:12.525 16:11:32 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:12.525 16:11:32 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:12.525 16:11:32 -- common/autotest_common.sh@10 -- $ set +x
00:02:12.525 ************************************
00:02:12.525 START TEST ubsan
00:02:12.525 ************************************
00:02:12.525 using ubsan
00:02:12.525 16:11:32 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan'
00:02:12.525
00:02:12.525 real 0m0.000s
00:02:12.525 user 0m0.000s
00:02:12.525 sys 0m0.000s
00:02:12.525 ************************************
00:02:12.525 END TEST ubsan
00:02:12.525 ************************************
00:02:12.525 16:11:32 -- common/autotest_common.sh@1115 -- $ xtrace_disable
00:02:12.525 16:11:32 -- common/autotest_common.sh@10 -- $ set +x
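Editor's note: run_test, seen in the asan/ubsan blocks above, wraps a command in START/END banners plus bash's `time` report; the real helper lives in autotest_common.sh, as the trace's file@line markers show. A rough standalone sketch of the idea, not SPDK's actual implementation:

# Sketch: banner-and-time wrapper in the spirit of run_test.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return "$rc"
}
run_test_sketch ubsan echo 'using ubsan'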
00:02:12.525 16:11:32 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:12.525 16:11:32 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:12.525 16:11:32 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:12.525 16:11:32 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:12.525 16:11:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:12.525 16:11:32 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:12.525 16:11:32 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:12.525 16:11:32 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:12.525 16:11:32 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:12.525 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:12.525 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:13.097 Using 'verbs' RDMA provider
00:02:25.936 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done.
00:02:35.946 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:02:35.946 Creating mk/config.mk...done.
00:02:35.946 Creating mk/cc.flags.mk...done.
00:02:35.946 Type 'make' to build.
00:02:35.946 16:11:55 -- spdk/autobuild.sh@69 -- $ run_test make make -j10
00:02:35.946 16:11:55 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:02:35.946 16:11:55 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:02:35.946 16:11:55 -- common/autotest_common.sh@10 -- $ set +x
00:02:35.946 ************************************
00:02:35.946 START TEST make
00:02:35.946 ************************************
00:02:35.946 16:11:55 -- common/autotest_common.sh@1114 -- $ make -j10
00:02:35.946 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:35.946 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:35.946 meson setup builddir \
00:02:35.946 -Dwith-libaio=enabled \
00:02:35.946 -Dwith-liburing=enabled \
00:02:35.946 -Dwith-libvfn=disabled \
00:02:35.946 -Dwith-spdk=false && \
00:02:35.946 meson compile -C builddir && \
00:02:35.946 cd -)
00:02:35.946 make[1]: Nothing to be done for 'all'.
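Editor's note: the -D flags above are xnvme's meson feature toggles; once builddir exists they can be listed and flipped with stock meson commands instead of re-running setup. A sketch, run from the xnvme source directory (the -Dwith-libvfn value shown is an assumption for illustration):

# Sketch: inspect and adjust the feature options chosen above.
meson configure builddir                         # list current option values
meson configure builddir -Dwith-libvfn=enabled   # flip one option in place
meson compile -C builddir                        # rebuild with the new setting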
00:02:38.495 The Meson build system
00:02:38.495 Version: 1.5.0
00:02:38.495 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:38.495 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:38.495 Build type: native build
00:02:38.495 Project name: xnvme
00:02:38.495 Project version: 0.7.3
00:02:38.495 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:38.495 C linker for the host machine: cc ld.bfd 2.40-14
00:02:38.495 Host machine cpu family: x86_64
00:02:38.495 Host machine cpu: x86_64
00:02:38.495 Message: host_machine.system: linux
00:02:38.495 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:38.495 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:38.495 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:38.495 Run-time dependency threads found: YES
00:02:38.495 Has header "setupapi.h" : NO
00:02:38.495 Has header "linux/blkzoned.h" : YES
00:02:38.495 Has header "linux/blkzoned.h" : YES (cached)
00:02:38.495 Has header "libaio.h" : YES
00:02:38.495 Library aio found: YES
00:02:38.495 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:38.495 Run-time dependency liburing found: YES 2.2
00:02:38.495 Dependency libvfn skipped: feature with-libvfn disabled
00:02:38.495 Run-time dependency appleframeworks found: NO (tried framework)
00:02:38.495 Run-time dependency appleframeworks found: NO (tried framework)
00:02:38.495 Configuring xnvme_config.h using configuration
00:02:38.495 Configuring xnvme.spec using configuration
00:02:38.495 Run-time dependency bash-completion found: YES 2.11
00:02:38.495 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:38.495 Program cp found: YES (/usr/bin/cp)
00:02:38.495 Has header "winsock2.h" : NO
00:02:38.495 Has header "dbghelp.h" : NO
00:02:38.495 Library rpcrt4 found: NO
00:02:38.495 Library rt found: YES
00:02:38.495 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:38.495 Found CMake: /usr/bin/cmake (3.27.7)
00:02:38.495 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:02:38.495 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:02:38.495 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:02:38.495 Build targets in project: 32
00:02:38.495
00:02:38.495 xnvme 0.7.3
00:02:38.495
00:02:38.495 User defined options
00:02:38.495 with-libaio : enabled
00:02:38.495 with-liburing: enabled
00:02:38.495 with-libvfn : disabled
00:02:38.495 with-spdk : false
00:02:38.495
00:02:38.495 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:38.495 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:38.495 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:02:38.757 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:02:38.757 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:02:38.757 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:02:38.757 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:02:38.757 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:02:38.757 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:02:38.757 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:02:38.757 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:02:38.757 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:02:38.757 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:02:38.757 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:02:38.757 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:02:38.757 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:02:38.757 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:02:38.757 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:02:38.757 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:02:38.757 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:02:38.757 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:02:38.757 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:02:38.757 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:02:39.017 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:02:39.017 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:02:39.017 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:02:39.017 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:02:39.017 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:02:39.017 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:02:39.017 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:02:39.017 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:02:39.017 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:02:39.017 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:02:39.017 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:02:39.017 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:02:39.017 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:02:39.017 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:02:39.017 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:02:39.017 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:02:39.017 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:02:39.017 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:02:39.017 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:02:39.017 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:02:39.017 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:02:39.017 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:02:39.017 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:02:39.017 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:02:39.017 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:02:39.017 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:02:39.017 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:02:39.017 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:02:39.017 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:02:39.017 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:02:39.017 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:02:39.017 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:02:39.017 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:02:39.017 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:02:39.017 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:02:39.017 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:02:39.017 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:02:39.276 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:02:39.276 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:02:39.276 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:02:39.276 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:02:39.276 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:02:39.276 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:02:39.276 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:02:39.276 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:02:39.276 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:02:39.276 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:02:39.276 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:02:39.276 [70/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:02:39.276 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:02:39.276 [72/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:02:39.276 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:02:39.276 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:02:39.276 [75/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:02:39.276 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:02:39.276 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:02:39.276 [78/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:02:39.276 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:02:39.534 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:02:39.535 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:02:39.535 [82/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:02:39.535 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:02:39.535 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:02:39.535 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:02:39.535 [86/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:02:39.535 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:02:39.535 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:02:39.535 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:02:39.535 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:02:39.535 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:02:39.535 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:02:39.535 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:02:39.535 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:02:39.535 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:02:39.535 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:02:39.535 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:02:39.535 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:02:39.535 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:02:39.535 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:02:39.535 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:02:39.535 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:02:39.535 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:02:39.535 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:02:39.535 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:02:39.535 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:02:39.535 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:02:39.535 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:02:39.793 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:02:39.793 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:02:39.793 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:02:39.793 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:02:39.793 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:02:39.793 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:02:39.793 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:02:39.793 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:02:39.793 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:02:39.793 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:02:39.793 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:02:39.793 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:02:39.793 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:02:39.793 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:02:39.793 [123/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:02:39.793 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:02:39.793 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:02:39.793 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:02:39.793 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:02:39.793 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:02:39.793 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:02:39.793 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:02:39.793 [131/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:02:39.793 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:02:39.793 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:02:39.793 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:02:39.793 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:02:39.793 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:02:39.793 [137/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:02:39.793 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:02:39.793 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:02:40.050 [140/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:02:40.050 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:02:40.050 [142/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:02:40.050 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:02:40.050 [144/203] Linking target lib/libxnvme.so
00:02:40.050 [145/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:02:40.050 [146/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:02:40.050 [147/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:02:40.050 [148/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:02:40.050 [149/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:02:40.050 [150/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:02:40.050 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:02:40.050 [152/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:02:40.050 [153/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:02:40.050 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:02:40.050 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:02:40.306 [156/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:02:40.306 [157/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:02:40.306 [158/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:02:40.306 [159/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:02:40.306 [160/203] Compiling C object tools/lblk.p/lblk.c.o
00:02:40.306 [161/203] Compiling C object tools/xdd.p/xdd.c.o
00:02:40.306 [162/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:02:40.306 [163/203] Compiling C object tools/kvs.p/kvs.c.o
00:02:40.306 [164/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:02:40.306 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:02:40.306 [166/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:02:40.306 [167/203] Compiling C object tools/zoned.p/zoned.c.o
00:02:40.306 [168/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:02:40.306 [169/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:02:40.306 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:02:40.563 [171/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:02:40.563 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:02:40.563 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:02:40.563 [174/203] Linking static target lib/libxnvme.a
00:02:40.563 [175/203] Linking target tests/xnvme_tests_buf
00:02:40.563 [176/203] Linking target tests/xnvme_tests_lblk
00:02:40.563 [177/203] Linking target tests/xnvme_tests_cli
00:02:40.563 [178/203] Linking target tests/xnvme_tests_enum
00:02:40.563 [179/203] Linking target tests/xnvme_tests_scc
00:02:40.563 [180/203] Linking target tests/xnvme_tests_async_intf
00:02:40.563 [181/203] Linking target tests/xnvme_tests_ioworker
00:02:40.563 [182/203] Linking target tests/xnvme_tests_znd_append
00:02:40.563 [183/203] Linking target tests/xnvme_tests_znd_state
00:02:40.563 [184/203] Linking target tests/xnvme_tests_xnvme_cli
00:02:40.563 [185/203] Linking target tests/xnvme_tests_xnvme_file
00:02:40.563 [186/203] Linking target tests/xnvme_tests_map
00:02:40.563 [187/203] Linking target tests/xnvme_tests_znd_explicit_open
00:02:40.563 [188/203] Linking target tools/lblk
00:02:40.563 [189/203] Linking target tools/xnvme
00:02:40.563 [190/203] Linking target tools/xdd
00:02:40.564 [191/203] Linking target tests/xnvme_tests_kvs
00:02:40.564 [192/203] Linking target tools/xnvme_file
00:02:40.564 [193/203] Linking target tests/xnvme_tests_znd_zrwa
00:02:40.564 [194/203] Linking target examples/xnvme_dev
00:02:40.564 [195/203] Linking target examples/xnvme_enum
00:02:40.564 [196/203] Linking target tools/zoned
00:02:40.564 [197/203] Linking target examples/xnvme_single_async
00:02:40.564 [198/203] Linking target tools/kvs
00:02:40.564 [199/203] Linking target examples/xnvme_hello
00:02:40.564 [200/203] Linking target examples/zoned_io_sync
00:02:40.564 [201/203] Linking target examples/xnvme_single_sync
00:02:40.564 [202/203] Linking target examples/xnvme_io_async
00:02:40.564 [203/203] Linking target examples/zoned_io_async
00:02:40.821 INFO: autodetecting backend as ninja
00:02:40.821 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:40.821 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:45.026 The Meson build system
00:02:45.026 Version: 1.5.0
00:02:45.026 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:45.026 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:45.026 Build type: native build
00:02:45.026 Program cat found: YES (/usr/bin/cat)
00:02:45.026 Project name: DPDK
00:02:45.026 Project version: 23.11.0
00:02:45.026 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:45.026 C linker for the host machine: cc ld.bfd 2.40-14
00:02:45.026 Host machine cpu family: x86_64
00:02:45.026 Host machine cpu: x86_64
00:02:45.026 Message: ## Building in Developer Mode ##
00:02:45.026 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:45.027 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:45.027 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:45.027 Program python3 found: YES (/usr/bin/python3)
00:02:45.027 Program cat found: YES (/usr/bin/cat)
00:02:45.027 Compiler for C supports arguments -march=native: YES
00:02:45.027 Checking for size of "void *" : 8
00:02:45.027 Checking for size of "void *" : 8 (cached)
00:02:45.027 Library m found: YES
00:02:45.027 Library numa found: YES
00:02:45.027 Has header "numaif.h" : YES
00:02:45.027 Library fdt found: NO
00:02:45.027 Library execinfo found: NO
00:02:45.027 Has header "execinfo.h" : YES
00:02:45.027 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:45.027 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:45.027 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:45.027 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:45.027 Run-time dependency openssl found: YES 3.1.1
00:02:45.027 Run-time dependency libpcap found: YES 1.10.4
00:02:45.027 Has header "pcap.h" with dependency libpcap: YES
00:02:45.027 Compiler for C supports arguments -Wcast-qual: YES
00:02:45.027 Compiler for C supports arguments -Wdeprecated: YES
00:02:45.027 Compiler for C supports arguments -Wformat: YES
00:02:45.027 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:45.027 Compiler for C supports arguments -Wformat-security: NO
00:02:45.027 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:45.027 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:45.027 Compiler for C supports arguments -Wnested-externs: YES
00:02:45.027 Compiler for C supports arguments -Wold-style-definition: YES
00:02:45.027 Compiler for C supports arguments -Wpointer-arith: YES
00:02:45.027 Compiler for C supports arguments -Wsign-compare: YES
00:02:45.027 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:45.027 Compiler for C supports arguments -Wundef: YES
00:02:45.027 Compiler for C supports arguments -Wwrite-strings: YES
00:02:45.027 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:45.027 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:45.027 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:45.027 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:45.027 Program objdump found: YES (/usr/bin/objdump)
00:02:45.027 Compiler for C supports arguments -mavx512f: YES
00:02:45.027 Checking if "AVX512 checking" compiles: YES
00:02:45.027 Fetching value of define "__SSE4_2__" : 1
00:02:45.027 Fetching value of define "__AES__" : 1
00:02:45.027 Fetching value of define "__AVX__" : 1
00:02:45.027 Fetching value of define "__AVX2__" : 1
00:02:45.027 Fetching value of define "__AVX512BW__" : 1
00:02:45.027 Fetching value of define "__AVX512CD__" : 1
00:02:45.027 Fetching value of define "__AVX512DQ__" : 1
00:02:45.027 Fetching value of define "__AVX512F__" : 1
00:02:45.027 Fetching value of define "__AVX512VL__" : 1
00:02:45.027 Fetching value of define "__PCLMUL__" : 1
00:02:45.027 Fetching value of define "__RDRND__" : 1
00:02:45.027 Fetching value of define "__RDSEED__" : 1
00:02:45.027 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:45.027 Fetching value of define "__znver1__" : (undefined)
00:02:45.027 Fetching value of define "__znver2__" : (undefined)
00:02:45.027 Fetching value of define "__znver3__" : (undefined)
00:02:45.027 Fetching value of define "__znver4__" : (undefined)
00:02:45.027 Library asan found: YES
00:02:45.027 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:45.027 Message: lib/log: Defining dependency "log"
00:02:45.027 Message: lib/kvargs: Defining dependency "kvargs"
00:02:45.027 Message: lib/telemetry: Defining dependency "telemetry"
00:02:45.027 Library rt found: YES
00:02:45.027 Checking for function "getentropy" : NO
00:02:45.027 Message: lib/eal: Defining dependency "eal"
00:02:45.027 Message: lib/ring: Defining dependency "ring"
00:02:45.027 Message: lib/rcu: Defining dependency "rcu"
00:02:45.027 Message: lib/mempool: Defining dependency "mempool"
00:02:45.027 Message: lib/mbuf: Defining dependency "mbuf"
00:02:45.027 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:45.027 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:45.027 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:45.027 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:45.027 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:45.027 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:45.027 Compiler for C supports arguments -mpclmul: YES
00:02:45.027 Compiler for C supports arguments -maes: YES
00:02:45.027 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:45.027 Compiler for C supports arguments -mavx512bw: YES
00:02:45.027 Compiler for C supports arguments -mavx512dq: YES
00:02:45.027 Compiler for C supports arguments -mavx512vl: YES
00:02:45.027 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:45.027 Compiler for C supports arguments -mavx2: YES
00:02:45.027 Compiler for C supports arguments -mavx: YES
00:02:45.027 Message: lib/net: Defining dependency "net"
00:02:45.027 Message: lib/meter: Defining dependency "meter"
00:02:45.027 Message: lib/ethdev: Defining dependency "ethdev"
00:02:45.027 Message: lib/pci: Defining dependency "pci"
00:02:45.027 Message: lib/cmdline: Defining dependency "cmdline"
00:02:45.027 Message: lib/hash: Defining dependency "hash"
00:02:45.027 Message: lib/timer: Defining dependency "timer"
00:02:45.027 Message: lib/compressdev: Defining dependency "compressdev"
00:02:45.027 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:45.027 Message: lib/dmadev: Defining dependency "dmadev"
00:02:45.027 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:45.027 Message: lib/power: Defining dependency "power"
00:02:45.027 Message: lib/reorder: Defining dependency "reorder"
00:02:45.027 Message: lib/security: Defining dependency "security"
00:02:45.027 Has header "linux/userfaultfd.h" : YES
00:02:45.027 Has header "linux/vduse.h" : YES
00:02:45.027 Message: lib/vhost: Defining dependency "vhost"
00:02:45.027 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:45.027 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:45.027 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:45.027 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:45.027 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:45.027 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:45.027 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:45.027 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:45.027 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:45.027 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:45.027 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:45.027 Configuring doxy-api-html.conf using configuration
00:02:45.027 Configuring doxy-api-man.conf using configuration
00:02:45.027 Program mandb found: YES (/usr/bin/mandb)
00:02:45.027 Program sphinx-build found: NO
00:02:45.027 Configuring rte_build_config.h using configuration
00:02:45.027 Message:
00:02:45.027 =================
00:02:45.027 Applications Enabled
00:02:45.027 =================
00:02:45.027
00:02:45.027 apps:
00:02:45.027
00:02:45.027
00:02:45.027 Message:
00:02:45.027 =================
00:02:45.027 Libraries Enabled
00:02:45.027 =================
00:02:45.027
00:02:45.027 libs:
00:02:45.027 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:45.027 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:45.027 cryptodev, dmadev, power, reorder, security, vhost,
00:02:45.027
00:02:45.027 Message:
00:02:45.027 ===============
00:02:45.027 Drivers Enabled
00:02:45.027 ===============
00:02:45.027
00:02:45.027 common:
00:02:45.027
00:02:45.027 bus:
00:02:45.027 pci, vdev,
00:02:45.027 mempool:
00:02:45.027 ring,
00:02:45.027 dma:
00:02:45.027
00:02:45.027 net:
00:02:45.027
00:02:45.027 crypto:
00:02:45.027
00:02:45.027 compress:
00:02:45.027
00:02:45.027 vdpa:
00:02:45.027
00:02:45.027
00:02:45.027 Message:
00:02:45.027 =================
00:02:45.027 Content Skipped
00:02:45.027 =================
00:02:45.027
00:02:45.027 apps:
00:02:45.027 dumpcap: explicitly disabled via build config
00:02:45.027 graph: explicitly disabled via build config
00:02:45.027 pdump: explicitly disabled via build config
00:02:45.027 proc-info: explicitly disabled via build config
00:02:45.027 test-acl: explicitly disabled via build config
00:02:45.027 test-bbdev: explicitly disabled via build config
00:02:45.027 test-cmdline: explicitly disabled via build config
00:02:45.027 test-compress-perf: explicitly disabled via build config
00:02:45.027 test-crypto-perf: explicitly disabled via build config
00:02:45.027 test-dma-perf: explicitly disabled via build config
00:02:45.027 test-eventdev: explicitly disabled via build config
00:02:45.027 test-fib: explicitly disabled via build config
00:02:45.027 test-flow-perf: explicitly disabled via build config
00:02:45.027 test-gpudev: explicitly disabled via build config
00:02:45.027 test-mldev: explicitly disabled via build config
00:02:45.027 test-pipeline: explicitly disabled via build config
00:02:45.027 test-pmd: explicitly disabled via build config
00:02:45.027 test-regex: explicitly disabled via build config
00:02:45.027 test-sad: explicitly disabled via build config
00:02:45.027 test-security-perf: explicitly disabled via build config
00:02:45.027
00:02:45.027 libs:
00:02:45.027 metrics: explicitly disabled via build config
00:02:45.027 acl: explicitly disabled via build config
00:02:45.027 bbdev: explicitly disabled via build config
00:02:45.028 bitratestats: explicitly disabled via build config
00:02:45.028 bpf: explicitly disabled via build config
00:02:45.028 cfgfile: explicitly disabled via build config
00:02:45.028 distributor: explicitly disabled via build config
00:02:45.028 efd: explicitly disabled via build config
00:02:45.028 eventdev: explicitly disabled via build config
00:02:45.028 dispatcher: explicitly disabled via build config
00:02:45.028 gpudev: explicitly disabled via build config
00:02:45.028 gro: explicitly disabled via build config
00:02:45.028 gso: explicitly disabled via build config
00:02:45.028 ip_frag: explicitly disabled via build config
00:02:45.028 jobstats: explicitly disabled via build config
00:02:45.028 latencystats: explicitly disabled via build config
00:02:45.028 lpm: explicitly disabled via build config
00:02:45.028 member: explicitly disabled via build config
00:02:45.028 pcapng: explicitly disabled via build config
00:02:45.028 rawdev: explicitly disabled via build config
00:02:45.028 regexdev: explicitly disabled via build config
00:02:45.028 mldev: explicitly disabled via build config
00:02:45.028 rib: explicitly disabled via build config
00:02:45.028 sched: explicitly disabled via build config
00:02:45.028 stack: explicitly disabled via build config
00:02:45.028 ipsec: explicitly disabled via build config
00:02:45.028 pdcp: explicitly disabled via build config
00:02:45.028 fib: explicitly disabled via build config
00:02:45.028 port: explicitly disabled via build config
00:02:45.028 pdump: explicitly disabled via build config
00:02:45.028 table: explicitly disabled via build config
00:02:45.028 pipeline: explicitly disabled via build config
00:02:45.028 graph: explicitly disabled via build config
00:02:45.028 node: explicitly disabled via build config
00:02:45.028
00:02:45.028 drivers:
00:02:45.028 common/cpt: not in enabled drivers build config
00:02:45.028 common/dpaax: not in enabled drivers build config
00:02:45.028 common/iavf: not in enabled drivers build config
00:02:45.028 common/idpf: not in enabled drivers build config
00:02:45.028 common/mvep: not in enabled drivers build config
00:02:45.028 common/octeontx: not in enabled drivers build config
00:02:45.028 bus/auxiliary: not in enabled drivers build config
00:02:45.028 bus/cdx: not in enabled drivers build config
00:02:45.028 bus/dpaa: not in enabled drivers build config
00:02:45.028 bus/fslmc: not in enabled drivers build config
00:02:45.028
bus/ifpga: not in enabled drivers build config 00:02:45.028 bus/platform: not in enabled drivers build config 00:02:45.028 bus/vmbus: not in enabled drivers build config 00:02:45.028 common/cnxk: not in enabled drivers build config 00:02:45.028 common/mlx5: not in enabled drivers build config 00:02:45.028 common/nfp: not in enabled drivers build config 00:02:45.028 common/qat: not in enabled drivers build config 00:02:45.028 common/sfc_efx: not in enabled drivers build config 00:02:45.028 mempool/bucket: not in enabled drivers build config 00:02:45.028 mempool/cnxk: not in enabled drivers build config 00:02:45.028 mempool/dpaa: not in enabled drivers build config 00:02:45.028 mempool/dpaa2: not in enabled drivers build config 00:02:45.028 mempool/octeontx: not in enabled drivers build config 00:02:45.028 mempool/stack: not in enabled drivers build config 00:02:45.028 dma/cnxk: not in enabled drivers build config 00:02:45.028 dma/dpaa: not in enabled drivers build config 00:02:45.028 dma/dpaa2: not in enabled drivers build config 00:02:45.028 dma/hisilicon: not in enabled drivers build config 00:02:45.028 dma/idxd: not in enabled drivers build config 00:02:45.028 dma/ioat: not in enabled drivers build config 00:02:45.028 dma/skeleton: not in enabled drivers build config 00:02:45.028 net/af_packet: not in enabled drivers build config 00:02:45.028 net/af_xdp: not in enabled drivers build config 00:02:45.028 net/ark: not in enabled drivers build config 00:02:45.028 net/atlantic: not in enabled drivers build config 00:02:45.028 net/avp: not in enabled drivers build config 00:02:45.028 net/axgbe: not in enabled drivers build config 00:02:45.028 net/bnx2x: not in enabled drivers build config 00:02:45.028 net/bnxt: not in enabled drivers build config 00:02:45.028 net/bonding: not in enabled drivers build config 00:02:45.028 net/cnxk: not in enabled drivers build config 00:02:45.028 net/cpfl: not in enabled drivers build config 00:02:45.028 net/cxgbe: not in enabled drivers build config 00:02:45.028 net/dpaa: not in enabled drivers build config 00:02:45.028 net/dpaa2: not in enabled drivers build config 00:02:45.028 net/e1000: not in enabled drivers build config 00:02:45.028 net/ena: not in enabled drivers build config 00:02:45.028 net/enetc: not in enabled drivers build config 00:02:45.028 net/enetfec: not in enabled drivers build config 00:02:45.028 net/enic: not in enabled drivers build config 00:02:45.028 net/failsafe: not in enabled drivers build config 00:02:45.028 net/fm10k: not in enabled drivers build config 00:02:45.028 net/gve: not in enabled drivers build config 00:02:45.028 net/hinic: not in enabled drivers build config 00:02:45.028 net/hns3: not in enabled drivers build config 00:02:45.028 net/i40e: not in enabled drivers build config 00:02:45.028 net/iavf: not in enabled drivers build config 00:02:45.028 net/ice: not in enabled drivers build config 00:02:45.028 net/idpf: not in enabled drivers build config 00:02:45.028 net/igc: not in enabled drivers build config 00:02:45.028 net/ionic: not in enabled drivers build config 00:02:45.028 net/ipn3ke: not in enabled drivers build config 00:02:45.028 net/ixgbe: not in enabled drivers build config 00:02:45.028 net/mana: not in enabled drivers build config 00:02:45.028 net/memif: not in enabled drivers build config 00:02:45.028 net/mlx4: not in enabled drivers build config 00:02:45.028 net/mlx5: not in enabled drivers build config 00:02:45.028 net/mvneta: not in enabled drivers build config 00:02:45.028 net/mvpp2: not in enabled drivers 
build config 00:02:45.028 net/netvsc: not in enabled drivers build config 00:02:45.028 net/nfb: not in enabled drivers build config 00:02:45.028 net/nfp: not in enabled drivers build config 00:02:45.028 net/ngbe: not in enabled drivers build config 00:02:45.028 net/null: not in enabled drivers build config 00:02:45.028 net/octeontx: not in enabled drivers build config 00:02:45.028 net/octeon_ep: not in enabled drivers build config 00:02:45.028 net/pcap: not in enabled drivers build config 00:02:45.028 net/pfe: not in enabled drivers build config 00:02:45.028 net/qede: not in enabled drivers build config 00:02:45.028 net/ring: not in enabled drivers build config 00:02:45.028 net/sfc: not in enabled drivers build config 00:02:45.028 net/softnic: not in enabled drivers build config 00:02:45.028 net/tap: not in enabled drivers build config 00:02:45.028 net/thunderx: not in enabled drivers build config 00:02:45.028 net/txgbe: not in enabled drivers build config 00:02:45.028 net/vdev_netvsc: not in enabled drivers build config 00:02:45.028 net/vhost: not in enabled drivers build config 00:02:45.028 net/virtio: not in enabled drivers build config 00:02:45.028 net/vmxnet3: not in enabled drivers build config 00:02:45.028 raw/*: missing internal dependency, "rawdev" 00:02:45.028 crypto/armv8: not in enabled drivers build config 00:02:45.028 crypto/bcmfs: not in enabled drivers build config 00:02:45.028 crypto/caam_jr: not in enabled drivers build config 00:02:45.028 crypto/ccp: not in enabled drivers build config 00:02:45.028 crypto/cnxk: not in enabled drivers build config 00:02:45.028 crypto/dpaa_sec: not in enabled drivers build config 00:02:45.028 crypto/dpaa2_sec: not in enabled drivers build config 00:02:45.028 crypto/ipsec_mb: not in enabled drivers build config 00:02:45.028 crypto/mlx5: not in enabled drivers build config 00:02:45.028 crypto/mvsam: not in enabled drivers build config 00:02:45.028 crypto/nitrox: not in enabled drivers build config 00:02:45.028 crypto/null: not in enabled drivers build config 00:02:45.028 crypto/octeontx: not in enabled drivers build config 00:02:45.028 crypto/openssl: not in enabled drivers build config 00:02:45.028 crypto/scheduler: not in enabled drivers build config 00:02:45.028 crypto/uadk: not in enabled drivers build config 00:02:45.028 crypto/virtio: not in enabled drivers build config 00:02:45.028 compress/isal: not in enabled drivers build config 00:02:45.028 compress/mlx5: not in enabled drivers build config 00:02:45.028 compress/octeontx: not in enabled drivers build config 00:02:45.028 compress/zlib: not in enabled drivers build config 00:02:45.028 regex/*: missing internal dependency, "regexdev" 00:02:45.028 ml/*: missing internal dependency, "mldev" 00:02:45.028 vdpa/ifc: not in enabled drivers build config 00:02:45.028 vdpa/mlx5: not in enabled drivers build config 00:02:45.028 vdpa/nfp: not in enabled drivers build config 00:02:45.028 vdpa/sfc: not in enabled drivers build config 00:02:45.028 event/*: missing internal dependency, "eventdev" 00:02:45.028 baseband/*: missing internal dependency, "bbdev" 00:02:45.028 gpu/*: missing internal dependency, "gpudev" 00:02:45.028 00:02:45.028 00:02:45.595 Build targets in project: 84 00:02:45.595 00:02:45.595 DPDK 23.11.0 00:02:45.595 00:02:45.595 User defined options 00:02:45.595 buildtype : debug 00:02:45.595 default_library : shared 00:02:45.595 libdir : lib 00:02:45.595 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:45.595 b_sanitize : address 00:02:45.595 c_args : -fPIC -Werror 
-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:02:45.595 c_link_args : 00:02:45.595 cpu_instruction_set: native 00:02:45.595 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:45.595 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:45.595 enable_docs : false 00:02:45.595 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:45.595 enable_kmods : false 00:02:45.595 tests : false 00:02:45.595 00:02:45.595 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:45.854 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:46.113 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:46.113 [2/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:46.113 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:46.113 [4/264] Linking static target lib/librte_kvargs.a 00:02:46.113 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:46.113 [6/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:46.113 [7/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:46.113 [8/264] Linking static target lib/librte_log.a 00:02:46.113 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:46.113 [10/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:46.372 [11/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:46.372 [12/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:46.372 [13/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:46.372 [14/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.372 [15/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:46.372 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:46.631 [17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:46.631 [18/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:46.631 [19/264] Linking static target lib/librte_telemetry.a 00:02:46.890 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:46.890 [21/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:46.890 [22/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:46.890 [23/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:46.890 [24/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:46.890 [25/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.890 [26/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:46.890 [27/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:46.890 [28/264] Linking target lib/librte_log.so.24.0 00:02:46.890 [29/264] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:47.149 [30/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:47.149 [31/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:47.149 [32/264] Linking target lib/librte_kvargs.so.24.0 00:02:47.149 [33/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:47.409 [34/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:47.409 [35/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:47.409 [36/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.409 [37/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:47.409 [38/264] Linking target lib/librte_telemetry.so.24.0 00:02:47.409 [39/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:47.409 [40/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:47.409 [41/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:47.409 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:47.409 [43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:47.409 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:47.409 [45/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:47.409 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:47.409 [47/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:47.667 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:47.667 [49/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:47.925 [50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:47.925 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:47.925 [52/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:47.925 [53/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:47.925 [54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:47.925 [55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:47.925 [56/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:47.925 [57/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:47.925 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:47.925 [59/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:47.925 [60/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:47.925 [61/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:47.925 [62/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:48.184 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:48.184 [64/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:48.184 [65/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:48.184 [66/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:48.443 [67/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:48.443 [68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 
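The objects above make up DPDK's EAL (environment abstraction layer), including the hugepage-backed allocator (malloc_heap.c, rte_malloc.c, eal_linux_eal_hugepage_info later in the list). As a minimal sketch of what a consumer of this build does with those pieces — illustrative only, not part of this log; the error handling and sizes are arbitrary choices:

```c
/* Minimal sketch (not from this log): initialize DPDK's EAL and allocate
 * from the hugepage heap built out of the malloc_heap.c / rte_malloc.c
 * objects compiled above. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_malloc.h>

int main(int argc, char **argv)
{
    /* rte_eal_init() consumes EAL arguments (core mask, hugepage opts, ...). */
    if (rte_eal_init(argc, argv) < 0) {
        fprintf(stderr, "EAL init failed\n");
        return 1;
    }

    /* 4 KiB from the DPDK heap, 64-byte aligned; NULL means no type tag. */
    void *buf = rte_malloc(NULL, 4096, 64);
    if (buf == NULL) {
        fprintf(stderr, "rte_malloc failed\n");
    } else {
        printf("allocated %p from the DPDK heap\n", buf);
        rte_free(buf);
    }

    rte_eal_cleanup();
    return 0;
}
```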
00:02:48.443 [69/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:48.443 [70/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:48.443 [71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:48.443 [72/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:48.443 [73/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:48.443 [74/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:48.443 [75/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:48.443 [76/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:48.443 [77/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:48.701 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:48.701 [79/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:48.701 [80/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:48.960 [81/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:48.960 [82/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:48.960 [83/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:48.960 [84/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:48.960 [85/264] Linking static target lib/librte_ring.a 00:02:48.960 [86/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:48.960 [87/264] Linking static target lib/librte_eal.a 00:02:49.219 [88/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:49.219 [89/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:49.219 [90/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:49.219 [91/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:49.219 [92/264] Linking static target lib/librte_mempool.a 00:02:49.219 [93/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:49.219 [94/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:49.219 [95/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.477 [96/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:49.477 [97/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:49.477 [98/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:49.477 [99/264] Linking static target lib/librte_rcu.a 00:02:49.736 [100/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:49.736 [101/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:49.736 [102/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:49.736 [103/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:49.994 [104/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.994 [105/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:49.994 [106/264] Linking static target lib/librte_meter.a 00:02:49.994 [107/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:49.994 [108/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:49.994 [109/264] Linking static target lib/librte_net.a 00:02:50.254 [110/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.254 
[111/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:50.254 [112/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:50.254 [113/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.254 [114/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:50.254 [115/264] Linking static target lib/librte_mbuf.a 00:02:50.517 [116/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.517 [117/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:50.517 [118/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:50.776 [119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:50.776 [120/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:51.035 [121/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:51.035 [122/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:51.035 [123/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:51.035 [124/264] Linking static target lib/librte_pci.a 00:02:51.294 [125/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:51.294 [126/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:51.294 [127/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:51.294 [128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:51.294 [129/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.294 [130/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:51.294 [131/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:51.294 [132/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:51.294 [133/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:51.294 [134/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:51.294 [135/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:51.294 [136/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:51.553 [137/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.553 [138/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:51.553 [139/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:51.553 [140/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:51.553 [141/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:51.553 [142/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:51.553 [143/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:51.553 [144/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:51.553 [145/264] Linking static target lib/librte_cmdline.a 00:02:51.812 [146/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:51.812 [147/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:51.812 [148/264] Linking static target lib/librte_timer.a 00:02:51.812 [149/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:51.812 [150/264] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:52.070 [151/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:52.071 [152/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:52.071 [153/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:52.071 [154/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:52.071 [155/264] Linking static target lib/librte_compressdev.a 00:02:52.071 [156/264] Linking static target lib/librte_ethdev.a 00:02:52.329 [157/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:52.329 [158/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:52.329 [159/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:52.329 [160/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.329 [161/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:52.329 [162/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:52.329 [163/264] Linking static target lib/librte_hash.a 00:02:52.588 [164/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:52.588 [165/264] Linking static target lib/librte_dmadev.a 00:02:52.588 [166/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:52.588 [167/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:52.588 [168/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:52.846 [169/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:52.846 [170/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.846 [171/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.846 [172/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:52.846 [173/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:52.846 [174/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:53.104 [175/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.104 [176/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:53.364 [177/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.364 [178/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:53.364 [179/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:53.364 [180/264] Linking static target lib/librte_power.a 00:02:53.364 [181/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:53.364 [182/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:53.364 [183/264] Linking static target lib/librte_cryptodev.a 00:02:53.622 [184/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:53.622 [185/264] Linking static target lib/librte_security.a 00:02:53.622 [186/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:53.622 [187/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:53.622 [188/264] Linking static target lib/librte_reorder.a 00:02:53.622 [189/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:54.189 [190/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 
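Among the objects linked above is librte_hash's cuckoo hash (hash_rte_cuckoo_hash.c.o, then "Linking static target lib/librte_hash.a"). A hedged sketch of its basic API, assuming the EAL has already been initialized as in the earlier sketch; the table name, sizes, and key contents are illustrative:

```c
/* Illustrative use of the cuckoo hash table compiled above.
 * hash_func = NULL selects the library's default hash function. */
#include <stdio.h>
#include <rte_hash.h>

static int lookup_demo(void)
{
    struct rte_hash_parameters params = {
        .name = "demo_hash",   /* must be unique per table */
        .entries = 1024,       /* capacity */
        .key_len = 16,         /* fixed key size in bytes */
        .hash_func = NULL,     /* NULL -> default hash */
        .socket_id = 0,        /* NUMA socket for the allocation */
    };

    struct rte_hash *h = rte_hash_create(&params);
    if (h == NULL)
        return -1;

    char key[16] = "nvme0n1";  /* zero-padded to key_len */
    int value = 42;

    /* Store a pointer-sized value under the key, then look it up. */
    rte_hash_add_key_data(h, key, &value);

    void *data = NULL;
    if (rte_hash_lookup_data(h, key, &data) >= 0)
        printf("found %d\n", *(int *)data);

    rte_hash_free(h);
    return 0;
}
```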
00:02:54.189 [191/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.189 [192/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.189 [193/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.189 [194/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:54.448 [195/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:54.448 [196/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:54.448 [197/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:54.448 [198/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:54.448 [199/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:54.448 [200/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:54.706 [201/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:54.706 [202/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:54.706 [203/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:54.706 [204/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:54.706 [205/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:54.706 [206/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:54.965 [207/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.965 [208/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:54.965 [209/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:54.965 [210/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:54.965 [211/264] Linking static target drivers/librte_bus_vdev.a 00:02:54.965 [212/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:54.965 [213/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:54.965 [214/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:54.965 [215/264] Linking static target drivers/librte_bus_pci.a 00:02:54.965 [216/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:54.965 [217/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:55.224 [218/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.224 [219/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:55.224 [220/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.224 [221/264] Linking static target drivers/librte_mempool_ring.a 00:02:55.224 [222/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:55.224 [223/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.789 [224/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:56.723 [225/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.723 [226/264] Linking target lib/librte_eal.so.24.0 00:02:56.723 [227/264] Generating symbol file 
lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:56.723 [228/264] Linking target lib/librte_meter.so.24.0 00:02:56.723 [229/264] Linking target lib/librte_ring.so.24.0 00:02:56.723 [230/264] Linking target lib/librte_pci.so.24.0 00:02:56.723 [231/264] Linking target lib/librte_timer.so.24.0 00:02:56.723 [232/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:56.723 [233/264] Linking target lib/librte_dmadev.so.24.0 00:02:56.723 [234/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:56.723 [235/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:56.723 [236/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:56.981 [237/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:56.981 [238/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:56.981 [239/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:56.981 [240/264] Linking target lib/librte_mempool.so.24.0 00:02:56.981 [241/264] Linking target lib/librte_rcu.so.24.0 00:02:56.981 [242/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:56.981 [243/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:56.981 [244/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:56.981 [245/264] Linking target lib/librte_mbuf.so.24.0 00:02:57.238 [246/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:57.238 [247/264] Linking target lib/librte_cryptodev.so.24.0 00:02:57.238 [248/264] Linking target lib/librte_net.so.24.0 00:02:57.238 [249/264] Linking target lib/librte_compressdev.so.24.0 00:02:57.238 [250/264] Linking target lib/librte_reorder.so.24.0 00:02:57.238 [251/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:57.238 [252/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:57.238 [253/264] Linking target lib/librte_hash.so.24.0 00:02:57.238 [254/264] Linking target lib/librte_cmdline.so.24.0 00:02:57.238 [255/264] Linking target lib/librte_security.so.24.0 00:02:57.495 [256/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:57.495 [257/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.756 [258/264] Linking target lib/librte_ethdev.so.24.0 00:02:57.756 [259/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:57.756 [260/264] Linking target lib/librte_power.so.24.0 00:02:58.691 [261/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:58.691 [262/264] Linking static target lib/librte_vhost.a 00:03:00.066 [263/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.066 [264/264] Linking target lib/librte_vhost.so.24.0 00:03:00.066 INFO: autodetecting backend as ninja 00:03:00.066 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:00.632 CC lib/log/log.o 00:03:00.632 CC lib/log/log_deprecated.o 00:03:00.632 CC lib/ut/ut.o 00:03:00.632 CC lib/log/log_flags.o 00:03:00.632 CC lib/ut_mock/mock.o 00:03:00.890 LIB libspdk_ut_mock.a 00:03:00.890 LIB libspdk_log.a 00:03:00.890 SO libspdk_ut_mock.so.5.0 00:03:00.890 LIB libspdk_ut.a 00:03:00.890 SO libspdk_log.so.6.1 00:03:00.890 SO 
libspdk_ut.so.1.0 00:03:00.890 SYMLINK libspdk_ut_mock.so 00:03:00.890 SYMLINK libspdk_ut.so 00:03:00.890 SYMLINK libspdk_log.so 00:03:01.148 CC lib/util/base64.o 00:03:01.148 CC lib/util/bit_array.o 00:03:01.148 CC lib/util/cpuset.o 00:03:01.148 CC lib/ioat/ioat.o 00:03:01.148 CC lib/util/crc32.o 00:03:01.148 CC lib/util/crc16.o 00:03:01.148 CC lib/util/crc32c.o 00:03:01.148 CC lib/dma/dma.o 00:03:01.148 CXX lib/trace_parser/trace.o 00:03:01.148 CC lib/vfio_user/host/vfio_user_pci.o 00:03:01.148 CC lib/vfio_user/host/vfio_user.o 00:03:01.148 CC lib/util/crc32_ieee.o 00:03:01.148 CC lib/util/crc64.o 00:03:01.148 CC lib/util/dif.o 00:03:01.148 LIB libspdk_dma.a 00:03:01.148 SO libspdk_dma.so.3.0 00:03:01.148 CC lib/util/fd.o 00:03:01.148 CC lib/util/file.o 00:03:01.148 SYMLINK libspdk_dma.so 00:03:01.148 CC lib/util/hexlify.o 00:03:01.148 CC lib/util/iov.o 00:03:01.148 CC lib/util/math.o 00:03:01.406 LIB libspdk_ioat.a 00:03:01.406 CC lib/util/pipe.o 00:03:01.406 SO libspdk_ioat.so.6.0 00:03:01.406 CC lib/util/strerror_tls.o 00:03:01.406 LIB libspdk_vfio_user.a 00:03:01.406 CC lib/util/string.o 00:03:01.406 SO libspdk_vfio_user.so.4.0 00:03:01.406 SYMLINK libspdk_ioat.so 00:03:01.407 CC lib/util/uuid.o 00:03:01.407 CC lib/util/fd_group.o 00:03:01.407 CC lib/util/xor.o 00:03:01.407 SYMLINK libspdk_vfio_user.so 00:03:01.407 CC lib/util/zipf.o 00:03:01.664 LIB libspdk_util.a 00:03:01.664 SO libspdk_util.so.8.0 00:03:01.922 SYMLINK libspdk_util.so 00:03:01.922 LIB libspdk_trace_parser.a 00:03:01.922 CC lib/json/json_parse.o 00:03:01.922 CC lib/json/json_util.o 00:03:01.922 CC lib/json/json_write.o 00:03:01.922 CC lib/env_dpdk/env.o 00:03:01.922 CC lib/env_dpdk/memory.o 00:03:01.922 CC lib/conf/conf.o 00:03:01.922 CC lib/vmd/vmd.o 00:03:01.922 CC lib/rdma/common.o 00:03:01.922 CC lib/idxd/idxd.o 00:03:01.922 SO libspdk_trace_parser.so.4.0 00:03:01.922 SYMLINK libspdk_trace_parser.so 00:03:01.922 CC lib/rdma/rdma_verbs.o 00:03:02.180 LIB libspdk_conf.a 00:03:02.180 CC lib/vmd/led.o 00:03:02.180 SO libspdk_conf.so.5.0 00:03:02.180 CC lib/idxd/idxd_user.o 00:03:02.180 CC lib/env_dpdk/pci.o 00:03:02.180 LIB libspdk_json.a 00:03:02.180 SYMLINK libspdk_conf.so 00:03:02.180 CC lib/env_dpdk/init.o 00:03:02.180 LIB libspdk_rdma.a 00:03:02.180 SO libspdk_json.so.5.1 00:03:02.180 SO libspdk_rdma.so.5.0 00:03:02.180 SYMLINK libspdk_rdma.so 00:03:02.180 CC lib/env_dpdk/threads.o 00:03:02.180 CC lib/env_dpdk/pci_ioat.o 00:03:02.180 SYMLINK libspdk_json.so 00:03:02.180 CC lib/idxd/idxd_kernel.o 00:03:02.439 CC lib/env_dpdk/pci_virtio.o 00:03:02.439 CC lib/env_dpdk/pci_vmd.o 00:03:02.439 CC lib/env_dpdk/pci_idxd.o 00:03:02.439 CC lib/env_dpdk/pci_event.o 00:03:02.440 LIB libspdk_idxd.a 00:03:02.440 SO libspdk_idxd.so.11.0 00:03:02.440 CC lib/env_dpdk/sigbus_handler.o 00:03:02.440 CC lib/env_dpdk/pci_dpdk.o 00:03:02.440 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:02.440 SYMLINK libspdk_idxd.so 00:03:02.440 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:02.440 LIB libspdk_vmd.a 00:03:02.699 CC lib/jsonrpc/jsonrpc_server.o 00:03:02.699 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:02.699 CC lib/jsonrpc/jsonrpc_client.o 00:03:02.699 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:02.699 SO libspdk_vmd.so.5.0 00:03:02.699 SYMLINK libspdk_vmd.so 00:03:02.699 LIB libspdk_jsonrpc.a 00:03:02.957 SO libspdk_jsonrpc.so.5.1 00:03:02.957 SYMLINK libspdk_jsonrpc.so 00:03:02.957 CC lib/rpc/rpc.o 00:03:03.215 LIB libspdk_rpc.a 00:03:03.215 SO libspdk_rpc.so.5.0 00:03:03.215 LIB libspdk_env_dpdk.a 00:03:03.215 SYMLINK libspdk_rpc.so 00:03:03.473 SO 
libspdk_env_dpdk.so.13.0 00:03:03.473 CC lib/trace/trace.o 00:03:03.473 CC lib/sock/sock.o 00:03:03.473 CC lib/trace/trace_flags.o 00:03:03.473 CC lib/sock/sock_rpc.o 00:03:03.473 CC lib/trace/trace_rpc.o 00:03:03.473 CC lib/notify/notify.o 00:03:03.473 CC lib/notify/notify_rpc.o 00:03:03.473 SYMLINK libspdk_env_dpdk.so 00:03:03.473 LIB libspdk_notify.a 00:03:03.473 SO libspdk_notify.so.5.0 00:03:03.732 LIB libspdk_trace.a 00:03:03.732 SYMLINK libspdk_notify.so 00:03:03.732 SO libspdk_trace.so.9.0 00:03:03.732 SYMLINK libspdk_trace.so 00:03:03.732 LIB libspdk_sock.a 00:03:03.732 SO libspdk_sock.so.8.0 00:03:03.732 CC lib/thread/thread.o 00:03:03.732 CC lib/thread/iobuf.o 00:03:03.990 SYMLINK libspdk_sock.so 00:03:03.990 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:03.990 CC lib/nvme/nvme_ctrlr.o 00:03:03.990 CC lib/nvme/nvme_fabric.o 00:03:03.990 CC lib/nvme/nvme_pcie_common.o 00:03:03.990 CC lib/nvme/nvme_ns_cmd.o 00:03:03.990 CC lib/nvme/nvme_ns.o 00:03:03.990 CC lib/nvme/nvme_qpair.o 00:03:03.990 CC lib/nvme/nvme_pcie.o 00:03:04.248 CC lib/nvme/nvme.o 00:03:04.505 CC lib/nvme/nvme_quirks.o 00:03:04.505 CC lib/nvme/nvme_transport.o 00:03:04.763 CC lib/nvme/nvme_discovery.o 00:03:04.763 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:04.763 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:04.763 CC lib/nvme/nvme_tcp.o 00:03:04.763 CC lib/nvme/nvme_opal.o 00:03:05.021 CC lib/nvme/nvme_io_msg.o 00:03:05.021 LIB libspdk_thread.a 00:03:05.021 SO libspdk_thread.so.9.0 00:03:05.021 CC lib/nvme/nvme_poll_group.o 00:03:05.021 SYMLINK libspdk_thread.so 00:03:05.021 CC lib/nvme/nvme_zns.o 00:03:05.021 CC lib/nvme/nvme_cuse.o 00:03:05.280 CC lib/nvme/nvme_vfio_user.o 00:03:05.280 CC lib/nvme/nvme_rdma.o 00:03:05.280 CC lib/accel/accel.o 00:03:05.280 CC lib/accel/accel_rpc.o 00:03:05.539 CC lib/accel/accel_sw.o 00:03:05.539 CC lib/blob/blobstore.o 00:03:05.539 CC lib/init/json_config.o 00:03:05.539 CC lib/init/subsystem.o 00:03:05.797 CC lib/virtio/virtio.o 00:03:05.797 CC lib/blob/request.o 00:03:05.797 CC lib/blob/zeroes.o 00:03:05.797 CC lib/blob/blob_bs_dev.o 00:03:05.797 CC lib/init/subsystem_rpc.o 00:03:05.797 CC lib/init/rpc.o 00:03:05.797 CC lib/virtio/virtio_vhost_user.o 00:03:06.057 CC lib/virtio/virtio_vfio_user.o 00:03:06.057 LIB libspdk_init.a 00:03:06.057 CC lib/virtio/virtio_pci.o 00:03:06.057 SO libspdk_init.so.4.0 00:03:06.057 SYMLINK libspdk_init.so 00:03:06.057 CC lib/event/app.o 00:03:06.057 CC lib/event/app_rpc.o 00:03:06.057 CC lib/event/reactor.o 00:03:06.057 CC lib/event/log_rpc.o 00:03:06.057 CC lib/event/scheduler_static.o 00:03:06.316 LIB libspdk_virtio.a 00:03:06.316 LIB libspdk_accel.a 00:03:06.316 SO libspdk_virtio.so.6.0 00:03:06.316 SO libspdk_accel.so.14.0 00:03:06.316 SYMLINK libspdk_virtio.so 00:03:06.574 LIB libspdk_nvme.a 00:03:06.574 SYMLINK libspdk_accel.so 00:03:06.574 CC lib/bdev/bdev_rpc.o 00:03:06.574 CC lib/bdev/part.o 00:03:06.574 CC lib/bdev/bdev.o 00:03:06.574 CC lib/bdev/bdev_zone.o 00:03:06.574 CC lib/bdev/scsi_nvme.o 00:03:06.574 LIB libspdk_event.a 00:03:06.574 SO libspdk_nvme.so.12.0 00:03:06.574 SO libspdk_event.so.12.0 00:03:06.832 SYMLINK libspdk_event.so 00:03:06.832 SYMLINK libspdk_nvme.so 00:03:08.735 LIB libspdk_blob.a 00:03:08.735 SO libspdk_blob.so.10.1 00:03:08.735 SYMLINK libspdk_blob.so 00:03:08.993 CC lib/lvol/lvol.o 00:03:08.993 CC lib/blobfs/blobfs.o 00:03:08.993 CC lib/blobfs/tree.o 00:03:09.294 LIB libspdk_bdev.a 00:03:09.294 SO libspdk_bdev.so.14.0 00:03:09.557 SYMLINK libspdk_bdev.so 00:03:09.557 CC lib/nvmf/ctrlr_discovery.o 00:03:09.557 CC lib/nvmf/ctrlr.o 
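The compile lines above cover SPDK's event framework (lib/event/app.o, lib/event/reactor.o) and threading layer (lib/thread/thread.o). A condensed sketch of the usual entry point for an app built on these libraries — note the spdk_app_opts_init() signature has varied across SPDK releases (older trees take only the opts pointer), so this assumes the two-argument form and is illustrative rather than version-exact:

```c
/* Sketch of an app on the SPDK event framework compiled above.
 * Assumes the two-argument spdk_app_opts_init(); older SPDK releases
 * took only &opts. */
#include "spdk/event.h"
#include "spdk/log.h"

static void
start_fn(void *ctx)
{
    /* Runs on the app's reactor once the framework is up. */
    SPDK_NOTICELOG("framework started\n");
    spdk_app_stop(0); /* request a clean shutdown */
}

int
main(int argc, char **argv)
{
    struct spdk_app_opts opts = {};

    spdk_app_opts_init(&opts, sizeof(opts));
    opts.name = "hello_app"; /* illustrative app name */

    /* Blocks until spdk_app_stop() is called from start_fn. */
    int rc = spdk_app_start(&opts, start_fn, NULL);

    spdk_app_fini();
    return rc;
}
```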
00:03:09.557 CC lib/nvmf/ctrlr_bdev.o 00:03:09.557 CC lib/nvmf/subsystem.o 00:03:09.557 CC lib/nbd/nbd.o 00:03:09.557 CC lib/scsi/dev.o 00:03:09.557 CC lib/ublk/ublk.o 00:03:09.557 CC lib/ftl/ftl_core.o 00:03:09.816 LIB libspdk_blobfs.a 00:03:09.816 SO libspdk_blobfs.so.9.0 00:03:09.816 CC lib/scsi/lun.o 00:03:09.816 SYMLINK libspdk_blobfs.so 00:03:09.816 LIB libspdk_lvol.a 00:03:09.816 CC lib/ftl/ftl_init.o 00:03:09.816 SO libspdk_lvol.so.9.1 00:03:09.816 SYMLINK libspdk_lvol.so 00:03:09.816 CC lib/ftl/ftl_layout.o 00:03:09.816 CC lib/ftl/ftl_debug.o 00:03:09.816 CC lib/scsi/port.o 00:03:10.076 CC lib/nbd/nbd_rpc.o 00:03:10.076 CC lib/scsi/scsi.o 00:03:10.076 CC lib/nvmf/nvmf.o 00:03:10.076 CC lib/nvmf/nvmf_rpc.o 00:03:10.076 CC lib/ftl/ftl_io.o 00:03:10.076 LIB libspdk_nbd.a 00:03:10.076 CC lib/scsi/scsi_bdev.o 00:03:10.076 SO libspdk_nbd.so.6.0 00:03:10.076 CC lib/ublk/ublk_rpc.o 00:03:10.334 SYMLINK libspdk_nbd.so 00:03:10.334 CC lib/ftl/ftl_sb.o 00:03:10.334 CC lib/ftl/ftl_l2p.o 00:03:10.334 CC lib/ftl/ftl_l2p_flat.o 00:03:10.334 CC lib/ftl/ftl_nv_cache.o 00:03:10.334 LIB libspdk_ublk.a 00:03:10.334 SO libspdk_ublk.so.2.0 00:03:10.334 CC lib/scsi/scsi_pr.o 00:03:10.334 CC lib/scsi/scsi_rpc.o 00:03:10.334 CC lib/ftl/ftl_band.o 00:03:10.334 SYMLINK libspdk_ublk.so 00:03:10.334 CC lib/ftl/ftl_band_ops.o 00:03:10.592 CC lib/ftl/ftl_writer.o 00:03:10.592 CC lib/scsi/task.o 00:03:10.592 CC lib/nvmf/transport.o 00:03:10.592 CC lib/nvmf/tcp.o 00:03:10.592 CC lib/nvmf/rdma.o 00:03:10.592 CC lib/ftl/ftl_rq.o 00:03:10.850 CC lib/ftl/ftl_reloc.o 00:03:10.850 CC lib/ftl/ftl_l2p_cache.o 00:03:10.850 LIB libspdk_scsi.a 00:03:10.850 CC lib/ftl/ftl_p2l.o 00:03:10.850 SO libspdk_scsi.so.8.0 00:03:10.850 CC lib/ftl/mngt/ftl_mngt.o 00:03:10.850 SYMLINK libspdk_scsi.so 00:03:10.850 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:11.109 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:11.109 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:11.109 CC lib/iscsi/conn.o 00:03:11.109 CC lib/vhost/vhost.o 00:03:11.109 CC lib/iscsi/init_grp.o 00:03:11.109 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:11.109 CC lib/vhost/vhost_rpc.o 00:03:11.368 CC lib/vhost/vhost_scsi.o 00:03:11.368 CC lib/iscsi/iscsi.o 00:03:11.368 CC lib/iscsi/md5.o 00:03:11.368 CC lib/vhost/vhost_blk.o 00:03:11.368 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:11.368 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:11.626 CC lib/iscsi/param.o 00:03:11.626 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:11.626 CC lib/vhost/rte_vhost_user.o 00:03:11.626 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:11.626 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:11.884 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:11.884 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:11.884 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:11.884 CC lib/iscsi/portal_grp.o 00:03:11.884 CC lib/ftl/utils/ftl_conf.o 00:03:11.884 CC lib/ftl/utils/ftl_md.o 00:03:11.884 CC lib/ftl/utils/ftl_mempool.o 00:03:12.142 CC lib/ftl/utils/ftl_bitmap.o 00:03:12.142 CC lib/ftl/utils/ftl_property.o 00:03:12.142 CC lib/iscsi/tgt_node.o 00:03:12.142 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:12.142 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:12.142 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:12.142 CC lib/iscsi/iscsi_subsystem.o 00:03:12.401 CC lib/iscsi/iscsi_rpc.o 00:03:12.401 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:12.401 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:12.401 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:12.401 LIB libspdk_nvmf.a 00:03:12.401 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:12.401 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:12.401 CC lib/ftl/nvc/ftl_nvc_dev.o 
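The NVMe-oF (lib/nvmf), iSCSI (lib/iscsi), vhost, and FTL objects compiled above are all configured at runtime through SPDK's JSON-RPC server, built from the lib/rpc and lib/jsonrpc objects earlier in this run. A minimal raw client is just a Unix-socket write; this sketch assumes the default socket path /var/tmp/spdk.sock and a target that is already running:

```c
/* Illustrative raw JSON-RPC client for the SPDK target built above.
 * Assumes the default RPC socket /var/tmp/spdk.sock. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    /* List all block devices known to the target. */
    const char *req =
        "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_get_bdevs\"}";
    write(fd, req, strlen(req));

    char buf[4096];
    ssize_t n = read(fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("%s\n", buf);
    }
    close(fd);
    return 0;
}
```

In practice the scripts/rpc.py wrapper shipped with SPDK does the same thing with argument handling on top.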
00:03:12.401 CC lib/iscsi/task.o 00:03:12.401 SO libspdk_nvmf.so.17.0 00:03:12.401 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:12.401 CC lib/ftl/base/ftl_base_dev.o 00:03:12.401 CC lib/ftl/base/ftl_base_bdev.o 00:03:12.661 CC lib/ftl/ftl_trace.o 00:03:12.661 SYMLINK libspdk_nvmf.so 00:03:12.661 LIB libspdk_iscsi.a 00:03:12.661 LIB libspdk_vhost.a 00:03:12.661 SO libspdk_iscsi.so.7.0 00:03:12.661 SO libspdk_vhost.so.7.1 00:03:12.661 LIB libspdk_ftl.a 00:03:12.661 SYMLINK libspdk_vhost.so 00:03:12.919 SYMLINK libspdk_iscsi.so 00:03:12.919 SO libspdk_ftl.so.8.0 00:03:13.178 SYMLINK libspdk_ftl.so 00:03:13.178 CC module/env_dpdk/env_dpdk_rpc.o 00:03:13.178 CC module/accel/dsa/accel_dsa.o 00:03:13.178 CC module/blob/bdev/blob_bdev.o 00:03:13.178 CC module/accel/ioat/accel_ioat.o 00:03:13.178 CC module/accel/error/accel_error.o 00:03:13.178 CC module/accel/iaa/accel_iaa.o 00:03:13.178 CC module/sock/posix/posix.o 00:03:13.178 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:13.178 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:13.178 CC module/scheduler/gscheduler/gscheduler.o 00:03:13.483 LIB libspdk_env_dpdk_rpc.a 00:03:13.483 SO libspdk_env_dpdk_rpc.so.5.0 00:03:13.483 SYMLINK libspdk_env_dpdk_rpc.so 00:03:13.483 CC module/accel/iaa/accel_iaa_rpc.o 00:03:13.483 LIB libspdk_scheduler_dpdk_governor.a 00:03:13.483 CC module/accel/ioat/accel_ioat_rpc.o 00:03:13.483 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:13.483 LIB libspdk_scheduler_gscheduler.a 00:03:13.483 LIB libspdk_scheduler_dynamic.a 00:03:13.483 CC module/accel/error/accel_error_rpc.o 00:03:13.483 CC module/accel/dsa/accel_dsa_rpc.o 00:03:13.483 SO libspdk_scheduler_gscheduler.so.3.0 00:03:13.483 SO libspdk_scheduler_dynamic.so.3.0 00:03:13.483 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:13.483 LIB libspdk_accel_iaa.a 00:03:13.483 SYMLINK libspdk_scheduler_gscheduler.so 00:03:13.483 SO libspdk_accel_iaa.so.2.0 00:03:13.483 SYMLINK libspdk_scheduler_dynamic.so 00:03:13.483 SYMLINK libspdk_accel_iaa.so 00:03:13.483 LIB libspdk_blob_bdev.a 00:03:13.483 LIB libspdk_accel_error.a 00:03:13.483 LIB libspdk_accel_ioat.a 00:03:13.483 LIB libspdk_accel_dsa.a 00:03:13.483 SO libspdk_blob_bdev.so.10.1 00:03:13.483 SO libspdk_accel_ioat.so.5.0 00:03:13.483 SO libspdk_accel_error.so.1.0 00:03:13.483 SO libspdk_accel_dsa.so.4.0 00:03:13.483 SYMLINK libspdk_accel_ioat.so 00:03:13.483 SYMLINK libspdk_blob_bdev.so 00:03:13.483 SYMLINK libspdk_accel_error.so 00:03:13.747 SYMLINK libspdk_accel_dsa.so 00:03:13.747 CC module/blobfs/bdev/blobfs_bdev.o 00:03:13.747 CC module/bdev/lvol/vbdev_lvol.o 00:03:13.747 CC module/bdev/nvme/bdev_nvme.o 00:03:13.747 CC module/bdev/error/vbdev_error.o 00:03:13.747 CC module/bdev/null/bdev_null.o 00:03:13.747 CC module/bdev/delay/vbdev_delay.o 00:03:13.747 CC module/bdev/malloc/bdev_malloc.o 00:03:13.747 CC module/bdev/passthru/vbdev_passthru.o 00:03:13.747 CC module/bdev/gpt/gpt.o 00:03:13.747 LIB libspdk_sock_posix.a 00:03:14.005 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:14.005 SO libspdk_sock_posix.so.5.0 00:03:14.005 CC module/bdev/null/bdev_null_rpc.o 00:03:14.005 SYMLINK libspdk_sock_posix.so 00:03:14.005 CC module/bdev/gpt/vbdev_gpt.o 00:03:14.005 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:14.005 CC module/bdev/error/vbdev_error_rpc.o 00:03:14.005 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:14.005 LIB libspdk_blobfs_bdev.a 00:03:14.005 SO libspdk_blobfs_bdev.so.5.0 00:03:14.005 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:14.005 LIB libspdk_bdev_malloc.a 00:03:14.005 LIB 
libspdk_bdev_null.a 00:03:14.005 SYMLINK libspdk_blobfs_bdev.so 00:03:14.263 SO libspdk_bdev_malloc.so.5.0 00:03:14.263 LIB libspdk_bdev_error.a 00:03:14.263 SO libspdk_bdev_null.so.5.0 00:03:14.263 SO libspdk_bdev_error.so.5.0 00:03:14.263 CC module/bdev/raid/bdev_raid.o 00:03:14.263 LIB libspdk_bdev_delay.a 00:03:14.263 SYMLINK libspdk_bdev_malloc.so 00:03:14.263 SO libspdk_bdev_delay.so.5.0 00:03:14.263 CC module/bdev/raid/bdev_raid_rpc.o 00:03:14.263 SYMLINK libspdk_bdev_null.so 00:03:14.263 CC module/bdev/split/vbdev_split.o 00:03:14.263 SYMLINK libspdk_bdev_error.so 00:03:14.263 CC module/bdev/split/vbdev_split_rpc.o 00:03:14.263 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:14.263 SYMLINK libspdk_bdev_delay.so 00:03:14.263 LIB libspdk_bdev_passthru.a 00:03:14.263 LIB libspdk_bdev_gpt.a 00:03:14.263 SO libspdk_bdev_passthru.so.5.0 00:03:14.263 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:14.263 SO libspdk_bdev_gpt.so.5.0 00:03:14.263 SYMLINK libspdk_bdev_passthru.so 00:03:14.263 CC module/bdev/xnvme/bdev_xnvme.o 00:03:14.263 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:14.263 CC module/bdev/raid/bdev_raid_sb.o 00:03:14.263 SYMLINK libspdk_bdev_gpt.so 00:03:14.263 CC module/bdev/raid/raid0.o 00:03:14.263 LIB libspdk_bdev_split.a 00:03:14.263 SO libspdk_bdev_split.so.5.0 00:03:14.521 SYMLINK libspdk_bdev_split.so 00:03:14.521 CC module/bdev/aio/bdev_aio.o 00:03:14.521 CC module/bdev/aio/bdev_aio_rpc.o 00:03:14.521 CC module/bdev/ftl/bdev_ftl.o 00:03:14.521 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:14.521 LIB libspdk_bdev_lvol.a 00:03:14.521 SO libspdk_bdev_lvol.so.5.0 00:03:14.521 CC module/bdev/raid/raid1.o 00:03:14.521 SYMLINK libspdk_bdev_lvol.so 00:03:14.521 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:14.521 LIB libspdk_bdev_zone_block.a 00:03:14.521 CC module/bdev/iscsi/bdev_iscsi.o 00:03:14.779 SO libspdk_bdev_zone_block.so.5.0 00:03:14.779 LIB libspdk_bdev_xnvme.a 00:03:14.779 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:14.779 SO libspdk_bdev_xnvme.so.2.0 00:03:14.779 LIB libspdk_bdev_aio.a 00:03:14.779 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:14.779 SO libspdk_bdev_aio.so.5.0 00:03:14.779 SYMLINK libspdk_bdev_zone_block.so 00:03:14.779 SYMLINK libspdk_bdev_xnvme.so 00:03:14.779 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:14.779 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:14.779 SYMLINK libspdk_bdev_aio.so 00:03:14.779 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:14.779 CC module/bdev/raid/concat.o 00:03:14.779 LIB libspdk_bdev_ftl.a 00:03:14.779 SO libspdk_bdev_ftl.so.5.0 00:03:14.779 SYMLINK libspdk_bdev_ftl.so 00:03:14.779 CC module/bdev/nvme/nvme_rpc.o 00:03:14.779 CC module/bdev/nvme/bdev_mdns_client.o 00:03:14.779 CC module/bdev/nvme/vbdev_opal.o 00:03:15.038 LIB libspdk_bdev_iscsi.a 00:03:15.038 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:15.038 SO libspdk_bdev_iscsi.so.5.0 00:03:15.038 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:15.038 SYMLINK libspdk_bdev_iscsi.so 00:03:15.038 LIB libspdk_bdev_raid.a 00:03:15.038 SO libspdk_bdev_raid.so.5.0 00:03:15.038 SYMLINK libspdk_bdev_raid.so 00:03:15.038 LIB libspdk_bdev_virtio.a 00:03:15.038 SO libspdk_bdev_virtio.so.5.0 00:03:15.298 SYMLINK libspdk_bdev_virtio.so 00:03:16.238 LIB libspdk_bdev_nvme.a 00:03:16.238 SO libspdk_bdev_nvme.so.6.0 00:03:16.238 SYMLINK libspdk_bdev_nvme.so 00:03:16.498 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:16.498 CC module/event/subsystems/iobuf/iobuf.o 00:03:16.498 CC module/event/subsystems/scheduler/scheduler.o 00:03:16.498 CC 
module/event/subsystems/iobuf/iobuf_rpc.o 00:03:16.498 CC module/event/subsystems/vmd/vmd.o 00:03:16.498 CC module/event/subsystems/sock/sock.o 00:03:16.498 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:16.498 LIB libspdk_event_scheduler.a 00:03:16.498 LIB libspdk_event_vhost_blk.a 00:03:16.498 LIB libspdk_event_sock.a 00:03:16.498 LIB libspdk_event_vmd.a 00:03:16.498 SO libspdk_event_vhost_blk.so.2.0 00:03:16.498 SO libspdk_event_scheduler.so.3.0 00:03:16.498 LIB libspdk_event_iobuf.a 00:03:16.498 SO libspdk_event_sock.so.4.0 00:03:16.498 SO libspdk_event_vmd.so.5.0 00:03:16.498 SO libspdk_event_iobuf.so.2.0 00:03:16.498 SYMLINK libspdk_event_vhost_blk.so 00:03:16.498 SYMLINK libspdk_event_scheduler.so 00:03:16.498 SYMLINK libspdk_event_sock.so 00:03:16.498 SYMLINK libspdk_event_vmd.so 00:03:16.757 SYMLINK libspdk_event_iobuf.so 00:03:16.757 CC module/event/subsystems/accel/accel.o 00:03:17.017 LIB libspdk_event_accel.a 00:03:17.017 SO libspdk_event_accel.so.5.0 00:03:17.017 SYMLINK libspdk_event_accel.so 00:03:17.017 CC module/event/subsystems/bdev/bdev.o 00:03:17.276 LIB libspdk_event_bdev.a 00:03:17.276 SO libspdk_event_bdev.so.5.0 00:03:17.276 SYMLINK libspdk_event_bdev.so 00:03:17.534 CC module/event/subsystems/scsi/scsi.o 00:03:17.534 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:17.534 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:17.534 CC module/event/subsystems/ublk/ublk.o 00:03:17.534 CC module/event/subsystems/nbd/nbd.o 00:03:17.534 LIB libspdk_event_nbd.a 00:03:17.534 LIB libspdk_event_scsi.a 00:03:17.534 SO libspdk_event_nbd.so.5.0 00:03:17.534 SO libspdk_event_scsi.so.5.0 00:03:17.534 LIB libspdk_event_ublk.a 00:03:17.534 SO libspdk_event_ublk.so.2.0 00:03:17.534 SYMLINK libspdk_event_nbd.so 00:03:17.534 SYMLINK libspdk_event_scsi.so 00:03:17.534 LIB libspdk_event_nvmf.a 00:03:17.793 SYMLINK libspdk_event_ublk.so 00:03:17.793 SO libspdk_event_nvmf.so.5.0 00:03:17.793 SYMLINK libspdk_event_nvmf.so 00:03:17.793 CC module/event/subsystems/iscsi/iscsi.o 00:03:17.793 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:17.793 LIB libspdk_event_vhost_scsi.a 00:03:17.793 LIB libspdk_event_iscsi.a 00:03:17.793 SO libspdk_event_vhost_scsi.so.2.0 00:03:17.793 SO libspdk_event_iscsi.so.5.0 00:03:18.051 SYMLINK libspdk_event_vhost_scsi.so 00:03:18.051 SYMLINK libspdk_event_iscsi.so 00:03:18.051 SO libspdk.so.5.0 00:03:18.051 SYMLINK libspdk.so 00:03:18.051 CXX app/trace/trace.o 00:03:18.309 CC examples/accel/perf/accel_perf.o 00:03:18.309 CC examples/sock/hello_world/hello_sock.o 00:03:18.309 CC examples/vmd/lsvmd/lsvmd.o 00:03:18.309 CC examples/ioat/perf/perf.o 00:03:18.309 CC examples/nvme/hello_world/hello_world.o 00:03:18.309 CC examples/nvmf/nvmf/nvmf.o 00:03:18.309 CC examples/blob/hello_world/hello_blob.o 00:03:18.309 CC examples/bdev/hello_world/hello_bdev.o 00:03:18.309 CC test/accel/dif/dif.o 00:03:18.309 LINK lsvmd 00:03:18.309 LINK ioat_perf 00:03:18.309 LINK hello_blob 00:03:18.309 LINK hello_bdev 00:03:18.309 LINK hello_world 00:03:18.309 CC examples/vmd/led/led.o 00:03:18.567 LINK hello_sock 00:03:18.567 LINK nvmf 00:03:18.567 LINK spdk_trace 00:03:18.567 LINK led 00:03:18.567 CC examples/ioat/verify/verify.o 00:03:18.567 CC examples/nvme/reconnect/reconnect.o 00:03:18.567 LINK dif 00:03:18.567 CC examples/blob/cli/blobcli.o 00:03:18.567 LINK accel_perf 00:03:18.567 CC examples/bdev/bdevperf/bdevperf.o 00:03:18.567 CC app/trace_record/trace_record.o 00:03:18.567 CC examples/util/zipf/zipf.o 00:03:18.567 LINK verify 00:03:18.857 CC app/nvmf_tgt/nvmf_main.o 
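The build has now reached the example apps, including examples/nvme/hello_world compiled above. That example follows the NVMe driver's probe/attach pattern; a condensed sketch of just that pattern follows (illustrative — see the real example in the SPDK tree for the queue-pair setup and I/O path, and note spdk_env_opts_init() here is the one-argument form):

```c
/* Condensed probe/attach skeleton in the style of the hello_world
 * example linked above; attach_cb fires once per attached controller. */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    printf("probing %s\n", trid->traddr);
    return true; /* true = attach to this controller */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr,
          const struct spdk_nvme_ctrlr_opts *opts)
{
    printf("attached to %s\n", trid->traddr);
}

int
main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "hello_sketch"; /* illustrative name */
    if (spdk_env_init(&opts) < 0)
        return 1;

    /* Enumerate local PCIe NVMe controllers. */
    if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0)
        return 1;

    return 0;
}
```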
00:03:18.857 CC test/app/bdev_svc/bdev_svc.o 00:03:18.857 LINK zipf 00:03:18.857 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:18.857 CC app/iscsi_tgt/iscsi_tgt.o 00:03:18.857 LINK spdk_trace_record 00:03:18.857 LINK nvmf_tgt 00:03:18.857 CC app/spdk_tgt/spdk_tgt.o 00:03:18.857 LINK bdev_svc 00:03:18.857 LINK reconnect 00:03:18.857 CC app/spdk_lspci/spdk_lspci.o 00:03:19.119 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:19.119 LINK iscsi_tgt 00:03:19.119 LINK spdk_lspci 00:03:19.119 LINK spdk_tgt 00:03:19.119 LINK blobcli 00:03:19.119 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:19.119 CC examples/idxd/perf/perf.o 00:03:19.119 LINK nvme_fuzz 00:03:19.119 CC examples/thread/thread/thread_ex.o 00:03:19.119 CC examples/nvme/arbitration/arbitration.o 00:03:19.119 LINK bdevperf 00:03:19.119 CC app/spdk_nvme_perf/perf.o 00:03:19.119 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:19.378 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:19.378 CC examples/nvme/hotplug/hotplug.o 00:03:19.378 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:19.378 LINK thread 00:03:19.378 LINK interrupt_tgt 00:03:19.378 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:19.378 LINK idxd_perf 00:03:19.378 LINK arbitration 00:03:19.378 LINK hotplug 00:03:19.636 CC app/spdk_nvme_identify/identify.o 00:03:19.636 CC app/spdk_nvme_discover/discovery_aer.o 00:03:19.636 LINK cmb_copy 00:03:19.636 CC app/spdk_top/spdk_top.o 00:03:19.636 LINK nvme_manage 00:03:19.636 LINK vhost_fuzz 00:03:19.636 CC app/vhost/vhost.o 00:03:19.636 CC app/spdk_dd/spdk_dd.o 00:03:19.636 LINK spdk_nvme_discover 00:03:19.636 CC examples/nvme/abort/abort.o 00:03:19.636 LINK vhost 00:03:19.893 CC app/fio/nvme/fio_plugin.o 00:03:19.893 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:19.894 LINK spdk_nvme_perf 00:03:19.894 CC app/fio/bdev/fio_plugin.o 00:03:19.894 LINK pmr_persistence 00:03:19.894 CC test/app/histogram_perf/histogram_perf.o 00:03:19.894 LINK spdk_dd 00:03:20.151 CC test/app/jsoncat/jsoncat.o 00:03:20.151 LINK histogram_perf 00:03:20.151 LINK abort 00:03:20.151 CC test/bdev/bdevio/bdevio.o 00:03:20.151 LINK spdk_nvme_identify 00:03:20.151 LINK jsoncat 00:03:20.151 CC test/app/stub/stub.o 00:03:20.151 CC test/blobfs/mkfs/mkfs.o 00:03:20.151 LINK spdk_bdev 00:03:20.410 TEST_HEADER include/spdk/accel.h 00:03:20.410 TEST_HEADER include/spdk/accel_module.h 00:03:20.410 TEST_HEADER include/spdk/assert.h 00:03:20.410 TEST_HEADER include/spdk/barrier.h 00:03:20.410 TEST_HEADER include/spdk/base64.h 00:03:20.410 TEST_HEADER include/spdk/bdev.h 00:03:20.410 LINK spdk_top 00:03:20.410 TEST_HEADER include/spdk/bdev_module.h 00:03:20.410 TEST_HEADER include/spdk/bdev_zone.h 00:03:20.410 LINK spdk_nvme 00:03:20.410 TEST_HEADER include/spdk/bit_array.h 00:03:20.410 TEST_HEADER include/spdk/bit_pool.h 00:03:20.410 TEST_HEADER include/spdk/blob_bdev.h 00:03:20.410 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:20.410 TEST_HEADER include/spdk/blobfs.h 00:03:20.410 TEST_HEADER include/spdk/blob.h 00:03:20.410 TEST_HEADER include/spdk/conf.h 00:03:20.410 TEST_HEADER include/spdk/config.h 00:03:20.410 TEST_HEADER include/spdk/cpuset.h 00:03:20.410 TEST_HEADER include/spdk/crc16.h 00:03:20.410 TEST_HEADER include/spdk/crc32.h 00:03:20.410 TEST_HEADER include/spdk/crc64.h 00:03:20.410 TEST_HEADER include/spdk/dif.h 00:03:20.410 TEST_HEADER include/spdk/dma.h 00:03:20.410 LINK stub 00:03:20.410 TEST_HEADER include/spdk/endian.h 00:03:20.410 TEST_HEADER include/spdk/env_dpdk.h 00:03:20.410 TEST_HEADER include/spdk/env.h 00:03:20.410 TEST_HEADER 
include/spdk/event.h 00:03:20.410 TEST_HEADER include/spdk/fd_group.h 00:03:20.410 TEST_HEADER include/spdk/fd.h 00:03:20.410 TEST_HEADER include/spdk/file.h 00:03:20.410 TEST_HEADER include/spdk/ftl.h 00:03:20.410 TEST_HEADER include/spdk/gpt_spec.h 00:03:20.410 TEST_HEADER include/spdk/hexlify.h 00:03:20.410 TEST_HEADER include/spdk/histogram_data.h 00:03:20.410 TEST_HEADER include/spdk/idxd.h 00:03:20.410 TEST_HEADER include/spdk/idxd_spec.h 00:03:20.410 CC test/dma/test_dma/test_dma.o 00:03:20.410 TEST_HEADER include/spdk/init.h 00:03:20.410 TEST_HEADER include/spdk/ioat.h 00:03:20.410 TEST_HEADER include/spdk/ioat_spec.h 00:03:20.410 TEST_HEADER include/spdk/iscsi_spec.h 00:03:20.410 TEST_HEADER include/spdk/json.h 00:03:20.410 TEST_HEADER include/spdk/jsonrpc.h 00:03:20.410 TEST_HEADER include/spdk/likely.h 00:03:20.410 TEST_HEADER include/spdk/log.h 00:03:20.410 TEST_HEADER include/spdk/lvol.h 00:03:20.410 TEST_HEADER include/spdk/memory.h 00:03:20.410 TEST_HEADER include/spdk/mmio.h 00:03:20.410 TEST_HEADER include/spdk/nbd.h 00:03:20.410 TEST_HEADER include/spdk/notify.h 00:03:20.410 TEST_HEADER include/spdk/nvme.h 00:03:20.410 TEST_HEADER include/spdk/nvme_intel.h 00:03:20.410 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:20.410 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:20.410 TEST_HEADER include/spdk/nvme_spec.h 00:03:20.410 TEST_HEADER include/spdk/nvme_zns.h 00:03:20.410 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:20.410 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:20.410 TEST_HEADER include/spdk/nvmf.h 00:03:20.410 TEST_HEADER include/spdk/nvmf_spec.h 00:03:20.410 TEST_HEADER include/spdk/nvmf_transport.h 00:03:20.410 TEST_HEADER include/spdk/opal.h 00:03:20.410 TEST_HEADER include/spdk/opal_spec.h 00:03:20.410 TEST_HEADER include/spdk/pci_ids.h 00:03:20.410 CC test/env/vtophys/vtophys.o 00:03:20.410 TEST_HEADER include/spdk/pipe.h 00:03:20.410 TEST_HEADER include/spdk/queue.h 00:03:20.410 LINK mkfs 00:03:20.410 TEST_HEADER include/spdk/reduce.h 00:03:20.410 TEST_HEADER include/spdk/rpc.h 00:03:20.410 CC test/env/mem_callbacks/mem_callbacks.o 00:03:20.410 TEST_HEADER include/spdk/scheduler.h 00:03:20.410 TEST_HEADER include/spdk/scsi.h 00:03:20.410 TEST_HEADER include/spdk/scsi_spec.h 00:03:20.410 TEST_HEADER include/spdk/sock.h 00:03:20.410 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:20.410 TEST_HEADER include/spdk/stdinc.h 00:03:20.410 TEST_HEADER include/spdk/string.h 00:03:20.410 TEST_HEADER include/spdk/thread.h 00:03:20.410 TEST_HEADER include/spdk/trace.h 00:03:20.410 TEST_HEADER include/spdk/trace_parser.h 00:03:20.410 TEST_HEADER include/spdk/tree.h 00:03:20.410 TEST_HEADER include/spdk/ublk.h 00:03:20.410 TEST_HEADER include/spdk/util.h 00:03:20.410 TEST_HEADER include/spdk/uuid.h 00:03:20.410 TEST_HEADER include/spdk/version.h 00:03:20.410 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:20.410 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:20.410 TEST_HEADER include/spdk/vhost.h 00:03:20.410 TEST_HEADER include/spdk/vmd.h 00:03:20.410 TEST_HEADER include/spdk/xor.h 00:03:20.410 TEST_HEADER include/spdk/zipf.h 00:03:20.410 CXX test/cpp_headers/accel.o 00:03:20.410 LINK bdevio 00:03:20.410 CC test/env/memory/memory_ut.o 00:03:20.410 CC test/env/pci/pci_ut.o 00:03:20.669 LINK vtophys 00:03:20.669 LINK env_dpdk_post_init 00:03:20.669 CXX test/cpp_headers/accel_module.o 00:03:20.669 CC test/event/event_perf/event_perf.o 00:03:20.669 LINK iscsi_fuzz 00:03:20.669 LINK test_dma 00:03:20.669 CXX test/cpp_headers/assert.o 00:03:20.669 CC 
test/rpc_client/rpc_client_test.o 00:03:20.669 LINK event_perf 00:03:20.669 CC test/nvme/aer/aer.o 00:03:20.669 CC test/lvol/esnap/esnap.o 00:03:20.669 LINK pci_ut 00:03:20.926 CXX test/cpp_headers/barrier.o 00:03:20.926 LINK mem_callbacks 00:03:20.926 CC test/event/reactor/reactor.o 00:03:20.926 CC test/thread/poller_perf/poller_perf.o 00:03:20.926 LINK rpc_client_test 00:03:20.926 CXX test/cpp_headers/base64.o 00:03:20.926 CC test/event/reactor_perf/reactor_perf.o 00:03:20.926 CXX test/cpp_headers/bdev.o 00:03:20.926 CXX test/cpp_headers/bdev_module.o 00:03:20.926 LINK poller_perf 00:03:20.926 LINK reactor 00:03:20.926 LINK aer 00:03:20.926 CXX test/cpp_headers/bdev_zone.o 00:03:20.926 LINK reactor_perf 00:03:21.184 CXX test/cpp_headers/bit_array.o 00:03:21.184 CC test/nvme/reset/reset.o 00:03:21.184 CXX test/cpp_headers/bit_pool.o 00:03:21.184 CC test/event/app_repeat/app_repeat.o 00:03:21.184 CXX test/cpp_headers/blob_bdev.o 00:03:21.184 CXX test/cpp_headers/blobfs_bdev.o 00:03:21.184 CC test/nvme/sgl/sgl.o 00:03:21.184 CC test/event/scheduler/scheduler.o 00:03:21.184 CC test/nvme/e2edp/nvme_dp.o 00:03:21.184 LINK app_repeat 00:03:21.184 CXX test/cpp_headers/blobfs.o 00:03:21.184 LINK memory_ut 00:03:21.184 CXX test/cpp_headers/blob.o 00:03:21.184 CXX test/cpp_headers/conf.o 00:03:21.442 CXX test/cpp_headers/config.o 00:03:21.442 LINK reset 00:03:21.442 LINK scheduler 00:03:21.442 CXX test/cpp_headers/cpuset.o 00:03:21.442 CC test/nvme/overhead/overhead.o 00:03:21.442 CXX test/cpp_headers/crc16.o 00:03:21.442 CXX test/cpp_headers/crc32.o 00:03:21.442 LINK sgl 00:03:21.442 CXX test/cpp_headers/crc64.o 00:03:21.442 LINK nvme_dp 00:03:21.442 CC test/nvme/err_injection/err_injection.o 00:03:21.442 CXX test/cpp_headers/dif.o 00:03:21.442 CXX test/cpp_headers/dma.o 00:03:21.442 CXX test/cpp_headers/endian.o 00:03:21.442 CC test/nvme/startup/startup.o 00:03:21.700 CC test/nvme/reserve/reserve.o 00:03:21.700 CC test/nvme/simple_copy/simple_copy.o 00:03:21.700 CC test/nvme/connect_stress/connect_stress.o 00:03:21.700 LINK err_injection 00:03:21.700 LINK overhead 00:03:21.700 CC test/nvme/boot_partition/boot_partition.o 00:03:21.700 CXX test/cpp_headers/env_dpdk.o 00:03:21.700 CC test/nvme/compliance/nvme_compliance.o 00:03:21.700 LINK startup 00:03:21.700 LINK connect_stress 00:03:21.700 LINK reserve 00:03:21.700 CC test/nvme/fused_ordering/fused_ordering.o 00:03:21.700 LINK simple_copy 00:03:21.700 LINK boot_partition 00:03:21.700 CXX test/cpp_headers/env.o 00:03:21.958 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:21.958 CC test/nvme/fdp/fdp.o 00:03:21.958 CXX test/cpp_headers/event.o 00:03:21.958 CXX test/cpp_headers/fd_group.o 00:03:21.958 CC test/nvme/cuse/cuse.o 00:03:21.958 CXX test/cpp_headers/fd.o 00:03:21.958 CXX test/cpp_headers/file.o 00:03:21.958 LINK fused_ordering 00:03:21.958 LINK doorbell_aers 00:03:21.958 LINK nvme_compliance 00:03:21.958 CXX test/cpp_headers/ftl.o 00:03:21.958 CXX test/cpp_headers/gpt_spec.o 00:03:21.958 CXX test/cpp_headers/hexlify.o 00:03:21.958 CXX test/cpp_headers/histogram_data.o 00:03:22.216 CXX test/cpp_headers/idxd.o 00:03:22.216 CXX test/cpp_headers/idxd_spec.o 00:03:22.216 CXX test/cpp_headers/init.o 00:03:22.216 CXX test/cpp_headers/ioat.o 00:03:22.216 LINK fdp 00:03:22.216 CXX test/cpp_headers/ioat_spec.o 00:03:22.216 CXX test/cpp_headers/iscsi_spec.o 00:03:22.216 CXX test/cpp_headers/json.o 00:03:22.216 CXX test/cpp_headers/jsonrpc.o 00:03:22.216 CXX test/cpp_headers/likely.o 00:03:22.216 CXX test/cpp_headers/log.o 00:03:22.216 CXX 
test/cpp_headers/lvol.o 00:03:22.216 CXX test/cpp_headers/memory.o 00:03:22.216 CXX test/cpp_headers/mmio.o 00:03:22.216 CXX test/cpp_headers/nbd.o 00:03:22.216 CXX test/cpp_headers/notify.o 00:03:22.216 CXX test/cpp_headers/nvme.o 00:03:22.216 CXX test/cpp_headers/nvme_intel.o 00:03:22.475 CXX test/cpp_headers/nvme_ocssd.o 00:03:22.475 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:22.475 CXX test/cpp_headers/nvme_spec.o 00:03:22.475 CXX test/cpp_headers/nvme_zns.o 00:03:22.475 CXX test/cpp_headers/nvmf_cmd.o 00:03:22.475 CXX test/cpp_headers/nvmf.o 00:03:22.475 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:22.475 CXX test/cpp_headers/nvmf_spec.o 00:03:22.475 CXX test/cpp_headers/nvmf_transport.o 00:03:22.475 CXX test/cpp_headers/opal.o 00:03:22.475 CXX test/cpp_headers/opal_spec.o 00:03:22.475 CXX test/cpp_headers/pci_ids.o 00:03:22.475 CXX test/cpp_headers/pipe.o 00:03:22.475 CXX test/cpp_headers/queue.o 00:03:22.733 CXX test/cpp_headers/reduce.o 00:03:22.733 LINK cuse 00:03:22.733 CXX test/cpp_headers/rpc.o 00:03:22.733 CXX test/cpp_headers/scheduler.o 00:03:22.733 CXX test/cpp_headers/scsi.o 00:03:22.733 CXX test/cpp_headers/scsi_spec.o 00:03:22.733 CXX test/cpp_headers/stdinc.o 00:03:22.733 CXX test/cpp_headers/sock.o 00:03:22.733 CXX test/cpp_headers/string.o 00:03:22.733 CXX test/cpp_headers/thread.o 00:03:22.733 CXX test/cpp_headers/trace.o 00:03:22.733 CXX test/cpp_headers/trace_parser.o 00:03:22.733 CXX test/cpp_headers/tree.o 00:03:22.733 CXX test/cpp_headers/ublk.o 00:03:22.733 CXX test/cpp_headers/util.o 00:03:22.733 CXX test/cpp_headers/uuid.o 00:03:22.733 CXX test/cpp_headers/version.o 00:03:22.733 CXX test/cpp_headers/vfio_user_pci.o 00:03:22.733 CXX test/cpp_headers/vfio_user_spec.o 00:03:22.733 CXX test/cpp_headers/vhost.o 00:03:22.733 CXX test/cpp_headers/vmd.o 00:03:22.991 CXX test/cpp_headers/xor.o 00:03:22.991 CXX test/cpp_headers/zipf.o 00:03:24.897 LINK esnap 00:03:24.897 00:03:24.897 real 0m49.410s 00:03:24.897 user 4m51.917s 00:03:24.897 sys 1m1.260s 00:03:24.897 16:12:44 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:24.897 16:12:44 -- common/autotest_common.sh@10 -- $ set +x 00:03:24.897 ************************************ 00:03:24.897 END TEST make 00:03:24.897 ************************************ 00:03:24.897 16:12:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:24.897 16:12:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:24.897 16:12:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:25.158 16:12:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:25.158 16:12:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:25.158 16:12:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:25.158 16:12:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:25.158 16:12:44 -- scripts/common.sh@335 -- # IFS=.-: 00:03:25.158 16:12:44 -- scripts/common.sh@335 -- # read -ra ver1 00:03:25.158 16:12:44 -- scripts/common.sh@336 -- # IFS=.-: 00:03:25.158 16:12:44 -- scripts/common.sh@336 -- # read -ra ver2 00:03:25.158 16:12:44 -- scripts/common.sh@337 -- # local 'op=<' 00:03:25.158 16:12:44 -- scripts/common.sh@339 -- # ver1_l=2 00:03:25.158 16:12:44 -- scripts/common.sh@340 -- # ver2_l=1 00:03:25.158 16:12:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:25.158 16:12:44 -- scripts/common.sh@343 -- # case "$op" in 00:03:25.158 16:12:44 -- scripts/common.sh@344 -- # : 1 00:03:25.158 16:12:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:25.158 16:12:44 -- scripts/common.sh@363 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:25.158 16:12:44 -- scripts/common.sh@364 -- # decimal 1 00:03:25.158 16:12:44 -- scripts/common.sh@352 -- # local d=1 00:03:25.158 16:12:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:25.158 16:12:44 -- scripts/common.sh@354 -- # echo 1 00:03:25.158 16:12:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:25.158 16:12:44 -- scripts/common.sh@365 -- # decimal 2 00:03:25.158 16:12:44 -- scripts/common.sh@352 -- # local d=2 00:03:25.158 16:12:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:25.158 16:12:44 -- scripts/common.sh@354 -- # echo 2 00:03:25.158 16:12:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:25.158 16:12:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:25.158 16:12:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:25.158 16:12:44 -- scripts/common.sh@367 -- # return 0 00:03:25.158 16:12:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:25.158 16:12:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:25.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.158 --rc genhtml_branch_coverage=1 00:03:25.158 --rc genhtml_function_coverage=1 00:03:25.158 --rc genhtml_legend=1 00:03:25.158 --rc geninfo_all_blocks=1 00:03:25.158 --rc geninfo_unexecuted_blocks=1 00:03:25.158 00:03:25.158 ' 00:03:25.158 16:12:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:25.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.158 --rc genhtml_branch_coverage=1 00:03:25.158 --rc genhtml_function_coverage=1 00:03:25.158 --rc genhtml_legend=1 00:03:25.158 --rc geninfo_all_blocks=1 00:03:25.158 --rc geninfo_unexecuted_blocks=1 00:03:25.158 00:03:25.158 ' 00:03:25.158 16:12:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:25.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.158 --rc genhtml_branch_coverage=1 00:03:25.158 --rc genhtml_function_coverage=1 00:03:25.158 --rc genhtml_legend=1 00:03:25.158 --rc geninfo_all_blocks=1 00:03:25.158 --rc geninfo_unexecuted_blocks=1 00:03:25.158 00:03:25.158 ' 00:03:25.158 16:12:44 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:25.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:25.158 --rc genhtml_branch_coverage=1 00:03:25.158 --rc genhtml_function_coverage=1 00:03:25.158 --rc genhtml_legend=1 00:03:25.158 --rc geninfo_all_blocks=1 00:03:25.158 --rc geninfo_unexecuted_blocks=1 00:03:25.158 00:03:25.158 ' 00:03:25.158 16:12:44 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:25.158 16:12:44 -- nvmf/common.sh@7 -- # uname -s 00:03:25.158 16:12:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:25.158 16:12:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:25.158 16:12:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:25.158 16:12:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:25.158 16:12:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:25.158 16:12:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:25.158 16:12:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:25.158 16:12:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:25.158 16:12:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:25.158 16:12:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:25.158 16:12:44 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ca9637a-df03-470e-a17c-bcf9a22a1537 00:03:25.158 16:12:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ca9637a-df03-470e-a17c-bcf9a22a1537 00:03:25.158 16:12:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:25.158 16:12:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:25.158 16:12:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:25.158 16:12:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:25.158 16:12:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:25.158 16:12:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:25.158 16:12:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:25.158 16:12:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.158 16:12:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.158 16:12:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.158 16:12:44 -- paths/export.sh@5 -- # export PATH 00:03:25.158 16:12:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:25.158 16:12:44 -- nvmf/common.sh@46 -- # : 0 00:03:25.158 16:12:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:25.158 16:12:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:25.158 16:12:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:25.158 16:12:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:25.158 16:12:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:25.158 16:12:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:25.158 16:12:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:25.158 16:12:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:25.158 16:12:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:25.158 16:12:44 -- spdk/autotest.sh@32 -- # uname -s 00:03:25.158 16:12:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:25.158 16:12:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:25.158 16:12:44 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:25.158 16:12:44 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:25.158 16:12:44 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:25.158 16:12:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:25.158 16:12:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:25.158 16:12:44 -- 
spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:25.158 16:12:44 -- spdk/autotest.sh@48 -- # udevadm_pid=48163 00:03:25.158 16:12:44 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:03:25.158 16:12:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:25.158 16:12:44 -- spdk/autotest.sh@54 -- # echo 48167 00:03:25.158 16:12:44 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:25.158 16:12:44 -- spdk/autotest.sh@56 -- # echo 48168 00:03:25.158 16:12:44 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:03:25.158 16:12:44 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:03:25.158 16:12:44 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:25.158 16:12:44 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:25.159 16:12:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:25.159 16:12:44 -- common/autotest_common.sh@10 -- # set +x 00:03:25.159 16:12:44 -- spdk/autotest.sh@70 -- # create_test_list 00:03:25.159 16:12:44 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:25.159 16:12:44 -- common/autotest_common.sh@10 -- # set +x 00:03:25.159 16:12:44 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:25.159 16:12:44 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:25.159 16:12:44 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:03:25.159 16:12:44 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:25.159 16:12:44 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:03:25.159 16:12:44 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:25.159 16:12:44 -- common/autotest_common.sh@1450 -- # uname 00:03:25.159 16:12:44 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:25.159 16:12:44 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:25.159 16:12:44 -- common/autotest_common.sh@1470 -- # uname 00:03:25.159 16:12:44 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:25.159 16:12:44 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:25.159 16:12:44 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:25.159 lcov: LCOV version 1.15 00:03:25.159 16:12:44 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:33.348 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:33.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:33.348 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:33.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:33.348 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions 
found 00:03:33.348 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:55.368 16:13:12 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:55.368 16:13:12 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:55.368 16:13:12 -- common/autotest_common.sh@10 -- # set +x 00:03:55.368 16:13:12 -- spdk/autotest.sh@89 -- # rm -f 00:03:55.368 16:13:12 -- spdk/autotest.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.368 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.368 0000:00:09.0 (1b36 0010): Already using the nvme driver 00:03:55.368 0000:00:08.0 (1b36 0010): Already using the nvme driver 00:03:55.368 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:03:55.368 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:03:55.368 16:13:13 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:55.368 16:13:13 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:55.368 16:13:13 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:55.368 16:13:13 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:55.368 16:13:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:55.368 16:13:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:55.368 16:13:13 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:55.368 16:13:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:55.368 16:13:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:55.368 16:13:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:55.368 16:13:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:55.368 16:13:13 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:55.368 16:13:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:55.368 16:13:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:55.368 16:13:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:55.368 16:13:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:55.368 16:13:13 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:55.368 16:13:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:55.368 16:13:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:55.368 16:13:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:55.368 16:13:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:03:55.368 16:13:13 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:03:55.368 16:13:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:55.368 16:13:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:55.368 16:13:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:55.368 16:13:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:03:55.368 16:13:13 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:03:55.368 16:13:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:55.368 16:13:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:55.368 16:13:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:55.368 16:13:13 -- common/autotest_common.sh@1668 -- # 
is_block_zoned nvme3c3n1 00:03:55.368 16:13:13 -- common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:03:55.368 16:13:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:55.368 16:13:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:55.368 16:13:13 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:55.368 16:13:13 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:55.368 16:13:13 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:55.368 16:13:13 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:55.369 16:13:13 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:55.369 16:13:13 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:55.369 16:13:13 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme2n2 /dev/nvme2n3 /dev/nvme3n1 00:03:55.369 16:13:13 -- spdk/autotest.sh@108 -- # grep -v p 00:03:55.369 16:13:13 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:55.369 16:13:13 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:55.369 16:13:13 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:55.369 16:13:13 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:55.369 16:13:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:55.369 No valid GPT data, bailing 00:03:55.369 16:13:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:55.369 16:13:13 -- scripts/common.sh@393 -- # pt= 00:03:55.369 16:13:13 -- scripts/common.sh@394 -- # return 1 00:03:55.369 16:13:13 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:55.369 1+0 records in 00:03:55.369 1+0 records out 00:03:55.369 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0314961 s, 33.3 MB/s 00:03:55.369 16:13:13 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:55.369 16:13:13 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:55.369 16:13:13 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme1n1 00:03:55.369 16:13:13 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:03:55.369 16:13:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:55.369 No valid GPT data, bailing 00:03:55.369 16:13:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:55.369 16:13:13 -- scripts/common.sh@393 -- # pt= 00:03:55.369 16:13:13 -- scripts/common.sh@394 -- # return 1 00:03:55.369 16:13:13 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:55.369 1+0 records in 00:03:55.369 1+0 records out 00:03:55.369 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00329267 s, 318 MB/s 00:03:55.369 16:13:13 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:55.369 16:13:13 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:55.369 16:13:13 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n1 00:03:55.369 16:13:13 -- scripts/common.sh@380 -- # local block=/dev/nvme2n1 pt 00:03:55.369 16:13:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:55.369 No valid GPT data, bailing 00:03:55.369 16:13:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:55.369 16:13:13 -- scripts/common.sh@393 -- # pt= 00:03:55.369 16:13:13 -- scripts/common.sh@394 -- # return 1 00:03:55.369 16:13:13 -- spdk/autotest.sh@112 -- # dd 
if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:55.369 1+0 records in 00:03:55.369 1+0 records out 00:03:55.369 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00489896 s, 214 MB/s 00:03:55.369 16:13:13 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:55.369 16:13:13 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:55.369 16:13:13 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n2 00:03:55.369 16:13:13 -- scripts/common.sh@380 -- # local block=/dev/nvme2n2 pt 00:03:55.369 16:13:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:03:55.369 No valid GPT data, bailing 00:03:55.369 16:13:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:03:55.369 16:13:13 -- scripts/common.sh@393 -- # pt= 00:03:55.369 16:13:13 -- scripts/common.sh@394 -- # return 1 00:03:55.369 16:13:13 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:03:55.369 1+0 records in 00:03:55.369 1+0 records out 00:03:55.369 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00481577 s, 218 MB/s 00:03:55.369 16:13:13 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:55.369 16:13:13 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:55.369 16:13:13 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme2n3 00:03:55.369 16:13:13 -- scripts/common.sh@380 -- # local block=/dev/nvme2n3 pt 00:03:55.369 16:13:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:03:55.369 No valid GPT data, bailing 00:03:55.369 16:13:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:03:55.369 16:13:13 -- scripts/common.sh@393 -- # pt= 00:03:55.369 16:13:13 -- scripts/common.sh@394 -- # return 1 00:03:55.369 16:13:13 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:03:55.369 1+0 records in 00:03:55.369 1+0 records out 00:03:55.369 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00587385 s, 179 MB/s 00:03:55.369 16:13:13 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:55.369 16:13:13 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:55.369 16:13:13 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme3n1 00:03:55.369 16:13:13 -- scripts/common.sh@380 -- # local block=/dev/nvme3n1 pt 00:03:55.369 16:13:13 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:55.369 No valid GPT data, bailing 00:03:55.369 16:13:13 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:55.369 16:13:13 -- scripts/common.sh@393 -- # pt= 00:03:55.369 16:13:13 -- scripts/common.sh@394 -- # return 1 00:03:55.369 16:13:13 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:55.369 1+0 records in 00:03:55.369 1+0 records out 00:03:55.369 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00573645 s, 183 MB/s 00:03:55.369 16:13:13 -- spdk/autotest.sh@116 -- # sync 00:03:55.369 16:13:14 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:55.369 16:13:14 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:55.369 16:13:14 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:56.314 16:13:15 -- spdk/autotest.sh@122 -- # uname -s 00:03:56.314 16:13:15 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:56.314 16:13:15 -- spdk/autotest.sh@123 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:56.314 16:13:15 -- common/autotest_common.sh@1087 
-- # '[' 2 -le 1 ']' 00:03:56.314 16:13:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.314 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:03:56.314 ************************************ 00:03:56.314 START TEST setup.sh 00:03:56.314 ************************************ 00:03:56.314 16:13:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:03:56.314 * Looking for test storage... 00:03:56.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:56.314 16:13:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:56.314 16:13:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:56.314 16:13:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:56.314 16:13:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:56.314 16:13:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:56.314 16:13:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:56.314 16:13:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:56.314 16:13:16 -- scripts/common.sh@335 -- # IFS=.-: 00:03:56.314 16:13:16 -- scripts/common.sh@335 -- # read -ra ver1 00:03:56.314 16:13:16 -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.314 16:13:16 -- scripts/common.sh@336 -- # read -ra ver2 00:03:56.314 16:13:16 -- scripts/common.sh@337 -- # local 'op=<' 00:03:56.314 16:13:16 -- scripts/common.sh@339 -- # ver1_l=2 00:03:56.314 16:13:16 -- scripts/common.sh@340 -- # ver2_l=1 00:03:56.314 16:13:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:56.314 16:13:16 -- scripts/common.sh@343 -- # case "$op" in 00:03:56.314 16:13:16 -- scripts/common.sh@344 -- # : 1 00:03:56.314 16:13:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:56.314 16:13:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:56.314 16:13:16 -- scripts/common.sh@364 -- # decimal 1 00:03:56.314 16:13:16 -- scripts/common.sh@352 -- # local d=1 00:03:56.314 16:13:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.314 16:13:16 -- scripts/common.sh@354 -- # echo 1 00:03:56.314 16:13:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:56.314 16:13:16 -- scripts/common.sh@365 -- # decimal 2 00:03:56.314 16:13:16 -- scripts/common.sh@352 -- # local d=2 00:03:56.314 16:13:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.314 16:13:16 -- scripts/common.sh@354 -- # echo 2 00:03:56.314 16:13:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:56.314 16:13:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:56.314 16:13:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:56.314 16:13:16 -- scripts/common.sh@367 -- # return 0 00:03:56.314 16:13:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.314 16:13:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:56.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.314 --rc genhtml_branch_coverage=1 00:03:56.314 --rc genhtml_function_coverage=1 00:03:56.314 --rc genhtml_legend=1 00:03:56.314 --rc geninfo_all_blocks=1 00:03:56.314 --rc geninfo_unexecuted_blocks=1 00:03:56.314 00:03:56.314 ' 00:03:56.314 16:13:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:56.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.314 --rc genhtml_branch_coverage=1 00:03:56.314 --rc genhtml_function_coverage=1 00:03:56.314 --rc genhtml_legend=1 00:03:56.314 --rc geninfo_all_blocks=1 00:03:56.314 --rc geninfo_unexecuted_blocks=1 00:03:56.314 00:03:56.314 ' 00:03:56.314 16:13:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:56.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.314 --rc genhtml_branch_coverage=1 00:03:56.314 --rc genhtml_function_coverage=1 00:03:56.314 --rc genhtml_legend=1 00:03:56.314 --rc geninfo_all_blocks=1 00:03:56.314 --rc geninfo_unexecuted_blocks=1 00:03:56.314 00:03:56.314 ' 00:03:56.314 16:13:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:56.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.314 --rc genhtml_branch_coverage=1 00:03:56.314 --rc genhtml_function_coverage=1 00:03:56.314 --rc genhtml_legend=1 00:03:56.314 --rc geninfo_all_blocks=1 00:03:56.314 --rc geninfo_unexecuted_blocks=1 00:03:56.314 00:03:56.314 ' 00:03:56.314 16:13:16 -- setup/test-setup.sh@10 -- # uname -s 00:03:56.314 16:13:16 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:56.314 16:13:16 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:56.314 16:13:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:56.314 16:13:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:56.314 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:03:56.314 ************************************ 00:03:56.314 START TEST acl 00:03:56.314 ************************************ 00:03:56.314 16:13:16 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:03:56.575 * Looking for test storage... 
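The version check traced just above, and repeated at the top of each suite, implements a field-wise comparison: both version strings are split into arrays on ".", "-", and ":" and compared element by element, with missing fields treated as zero. Here it decides that the installed lcov (1.15) is older than 2, which selects the LCOV_OPTS branch/function-coverage flags exported right after. A compact sketch of the same algorithm, reconstructed from the xtrace output (the real cmp_versions in scripts/common.sh also validates that each field is numeric):

    # Return 0 (true) when version $1 sorts strictly before version $2.
    version_lt() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$2"
      local v
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1   # versions are equal
    }

    version_lt 1.15 2 && echo "old lcov flags"   # prints: old lcov flags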
00:03:56.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:03:56.575 16:13:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:56.575 16:13:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:56.575 16:13:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:56.575 16:13:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:56.575 16:13:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:56.575 16:13:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:56.575 16:13:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:56.575 16:13:16 -- scripts/common.sh@335 -- # IFS=.-: 00:03:56.575 16:13:16 -- scripts/common.sh@335 -- # read -ra ver1 00:03:56.575 16:13:16 -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.575 16:13:16 -- scripts/common.sh@336 -- # read -ra ver2 00:03:56.575 16:13:16 -- scripts/common.sh@337 -- # local 'op=<' 00:03:56.575 16:13:16 -- scripts/common.sh@339 -- # ver1_l=2 00:03:56.575 16:13:16 -- scripts/common.sh@340 -- # ver2_l=1 00:03:56.575 16:13:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:56.575 16:13:16 -- scripts/common.sh@343 -- # case "$op" in 00:03:56.575 16:13:16 -- scripts/common.sh@344 -- # : 1 00:03:56.575 16:13:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:56.575 16:13:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:56.576 16:13:16 -- scripts/common.sh@364 -- # decimal 1 00:03:56.576 16:13:16 -- scripts/common.sh@352 -- # local d=1 00:03:56.576 16:13:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.576 16:13:16 -- scripts/common.sh@354 -- # echo 1 00:03:56.576 16:13:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:56.576 16:13:16 -- scripts/common.sh@365 -- # decimal 2 00:03:56.576 16:13:16 -- scripts/common.sh@352 -- # local d=2 00:03:56.576 16:13:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.576 16:13:16 -- scripts/common.sh@354 -- # echo 2 00:03:56.576 16:13:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:56.576 16:13:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:56.576 16:13:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:56.576 16:13:16 -- scripts/common.sh@367 -- # return 0 00:03:56.576 16:13:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.576 16:13:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:56.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.576 --rc genhtml_branch_coverage=1 00:03:56.576 --rc genhtml_function_coverage=1 00:03:56.576 --rc genhtml_legend=1 00:03:56.576 --rc geninfo_all_blocks=1 00:03:56.576 --rc geninfo_unexecuted_blocks=1 00:03:56.576 00:03:56.576 ' 00:03:56.576 16:13:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:56.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.576 --rc genhtml_branch_coverage=1 00:03:56.576 --rc genhtml_function_coverage=1 00:03:56.576 --rc genhtml_legend=1 00:03:56.576 --rc geninfo_all_blocks=1 00:03:56.576 --rc geninfo_unexecuted_blocks=1 00:03:56.576 00:03:56.576 ' 00:03:56.576 16:13:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:56.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.576 --rc genhtml_branch_coverage=1 00:03:56.576 --rc genhtml_function_coverage=1 00:03:56.576 --rc genhtml_legend=1 00:03:56.576 --rc geninfo_all_blocks=1 00:03:56.576 --rc geninfo_unexecuted_blocks=1 00:03:56.576 00:03:56.576 ' 00:03:56.576 16:13:16 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:56.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.576 --rc genhtml_branch_coverage=1 00:03:56.576 --rc genhtml_function_coverage=1 00:03:56.576 --rc genhtml_legend=1 00:03:56.576 --rc geninfo_all_blocks=1 00:03:56.576 --rc geninfo_unexecuted_blocks=1 00:03:56.576 00:03:56.576 ' 00:03:56.576 16:13:16 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:56.576 16:13:16 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:56.576 16:13:16 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:56.576 16:13:16 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:56.576 16:13:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.576 16:13:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:56.576 16:13:16 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:56.576 16:13:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:56.576 16:13:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.576 16:13:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.576 16:13:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:03:56.576 16:13:16 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:03:56.576 16:13:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:56.576 16:13:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.576 16:13:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.576 16:13:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:03:56.576 16:13:16 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:03:56.576 16:13:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:56.576 16:13:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.576 16:13:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.576 16:13:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n2 00:03:56.576 16:13:16 -- common/autotest_common.sh@1657 -- # local device=nvme2n2 00:03:56.576 16:13:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:03:56.576 16:13:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.576 16:13:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.576 16:13:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n3 00:03:56.576 16:13:16 -- common/autotest_common.sh@1657 -- # local device=nvme2n3 00:03:56.576 16:13:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:03:56.576 16:13:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.576 16:13:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.576 16:13:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3c3n1 00:03:56.576 16:13:16 -- common/autotest_common.sh@1657 -- # local device=nvme3c3n1 00:03:56.576 16:13:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:56.576 16:13:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.576 16:13:16 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:56.576 16:13:16 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:03:56.576 16:13:16 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:03:56.576 
16:13:16 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:03:56.576 16:13:16 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:56.576 16:13:16 -- setup/acl.sh@12 -- # devs=() 00:03:56.576 16:13:16 -- setup/acl.sh@12 -- # declare -a devs 00:03:56.576 16:13:16 -- setup/acl.sh@13 -- # drivers=() 00:03:56.576 16:13:16 -- setup/acl.sh@13 -- # declare -A drivers 00:03:56.576 16:13:16 -- setup/acl.sh@51 -- # setup reset 00:03:56.576 16:13:16 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.576 16:13:16 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:57.520 16:13:17 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:57.520 16:13:17 -- setup/acl.sh@16 -- # local dev driver 00:03:57.520 16:13:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.520 16:13:17 -- setup/acl.sh@15 -- # setup output status 00:03:57.520 16:13:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.520 16:13:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:57.781 Hugepages 00:03:57.781 node hugesize free / total 00:03:57.781 16:13:17 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:57.781 16:13:17 -- setup/acl.sh@19 -- # continue 00:03:57.781 16:13:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.781 00:03:57.781 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:57.781 16:13:17 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:57.781 16:13:17 -- setup/acl.sh@19 -- # continue 00:03:57.781 16:13:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:57.781 16:13:17 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:03:57.781 16:13:17 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:03:57.781 16:13:17 -- setup/acl.sh@20 -- # continue 00:03:57.781 16:13:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.043 16:13:17 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:03:58.043 16:13:17 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.043 16:13:17 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:03:58.043 16:13:17 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.043 16:13:17 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:58.043 16:13:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.043 16:13:17 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:03:58.043 16:13:17 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.043 16:13:17 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:03:58.043 16:13:17 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.043 16:13:17 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:58.043 16:13:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.043 16:13:17 -- setup/acl.sh@19 -- # [[ 0000:00:08.0 == *:*:*.* ]] 00:03:58.043 16:13:17 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.043 16:13:17 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:03:58.043 16:13:17 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.043 16:13:17 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:58.043 16:13:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.043 16:13:17 -- setup/acl.sh@19 -- # [[ 0000:00:09.0 == *:*:*.* ]] 00:03:58.043 16:13:17 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:58.043 16:13:17 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:03:58.043 16:13:17 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:58.043 16:13:17 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 
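The loop traced above is the device-collection step of the ACL suite: it reads the table printed by setup.sh status line by line, keeps only rows whose second field is a PCI BDF address and whose driver column is nvme, and skips any address listed in PCI_BLOCKED. A condensed sketch of that loop, reconstructed from the xtrace output (simplified; the real helper lives in test/setup/acl.sh):

    devs=()
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
      [[ $dev == *:*:*.* ]] || continue           # skip hugepage and header rows
      [[ $driver == nvme ]] || continue           # keep NVMe-bound controllers only
      [[ $PCI_BLOCKED == *"$dev"* ]] && continue  # honor the block list
      devs+=("$dev")
      drivers["$dev"]=$driver
    done < <(/home/vagrant/spdk_repo/spdk/scripts/setup.sh status)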
00:03:58.044 16:13:17 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:58.044 16:13:17 -- setup/acl.sh@24 -- # (( 4 > 0 )) 00:03:58.044 16:13:17 -- setup/acl.sh@54 -- # run_test denied denied 00:03:58.044 16:13:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:58.044 16:13:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:58.044 16:13:17 -- common/autotest_common.sh@10 -- # set +x 00:03:58.044 ************************************ 00:03:58.044 START TEST denied 00:03:58.044 ************************************ 00:03:58.044 16:13:17 -- common/autotest_common.sh@1114 -- # denied 00:03:58.044 16:13:17 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:03:58.044 16:13:17 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:03:58.044 16:13:17 -- setup/acl.sh@38 -- # setup output config 00:03:58.044 16:13:17 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:58.044 16:13:17 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:03:59.431 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:03:59.432 16:13:18 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:03:59.432 16:13:18 -- setup/acl.sh@28 -- # local dev driver 00:03:59.432 16:13:18 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:59.432 16:13:18 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:03:59.432 16:13:18 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:03:59.432 16:13:18 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:59.432 16:13:18 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:59.432 16:13:18 -- setup/acl.sh@41 -- # setup reset 00:03:59.432 16:13:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:59.432 16:13:18 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:06.019 00:04:06.019 real 0m7.050s 00:04:06.019 user 0m0.706s 00:04:06.019 sys 0m1.143s 00:04:06.019 16:13:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:06.019 ************************************ 00:04:06.019 END TEST denied 00:04:06.019 ************************************ 00:04:06.019 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:04:06.019 16:13:24 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:06.019 16:13:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:06.019 16:13:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:06.019 16:13:24 -- common/autotest_common.sh@10 -- # set +x 00:04:06.019 ************************************ 00:04:06.019 START TEST allowed 00:04:06.019 ************************************ 00:04:06.019 16:13:24 -- common/autotest_common.sh@1114 -- # allowed 00:04:06.019 16:13:24 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:04:06.019 16:13:24 -- setup/acl.sh@45 -- # setup output config 00:04:06.019 16:13:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.019 16:13:24 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:04:06.019 16:13:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:06.279 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:06.279 16:13:25 -- setup/acl.sh@47 -- # verify 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:06.279 16:13:25 -- setup/acl.sh@28 -- # local dev driver 00:04:06.279 16:13:25 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:06.279 16:13:25 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:04:06.279 16:13:25 -- setup/acl.sh@32 -- # readlink -f 
/sys/bus/pci/devices/0000:00:07.0/driver 00:04:06.279 16:13:25 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:06.279 16:13:25 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:06.279 16:13:25 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:06.279 16:13:25 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:08.0 ]] 00:04:06.280 16:13:25 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:08.0/driver 00:04:06.280 16:13:25 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:06.280 16:13:25 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:06.280 16:13:25 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:06.280 16:13:25 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:09.0 ]] 00:04:06.280 16:13:25 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:09.0/driver 00:04:06.280 16:13:25 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:06.280 16:13:25 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:06.280 16:13:25 -- setup/acl.sh@48 -- # setup reset 00:04:06.280 16:13:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.280 16:13:25 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:07.668 00:04:07.668 real 0m2.131s 00:04:07.668 user 0m0.831s 00:04:07.668 sys 0m1.044s 00:04:07.668 16:13:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:07.668 ************************************ 00:04:07.668 END TEST allowed 00:04:07.668 ************************************ 00:04:07.668 16:13:27 -- common/autotest_common.sh@10 -- # set +x 00:04:07.668 00:04:07.668 real 0m11.002s 00:04:07.668 user 0m2.262s 00:04:07.668 sys 0m3.112s 00:04:07.668 16:13:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:07.668 ************************************ 00:04:07.668 END TEST acl 00:04:07.668 ************************************ 00:04:07.668 16:13:27 -- common/autotest_common.sh@10 -- # set +x 00:04:07.668 16:13:27 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:07.668 16:13:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:07.668 16:13:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:07.668 16:13:27 -- common/autotest_common.sh@10 -- # set +x 00:04:07.668 ************************************ 00:04:07.668 START TEST hugepages 00:04:07.668 ************************************ 00:04:07.668 16:13:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:04:07.668 * Looking for test storage... 
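The START TEST / END TEST banners and the real/user/sys triplets that closed denied, allowed, and acl above, and that now open hugepages, all come from the run_test wrapper. A bare-bones sketch of the pattern, reconstructed from the banners alone (hypothetical; the real run_test in autotest_common.sh also validates its argument count, as the '[' 2 -le 1 ']' check in the trace shows, and manages xtrace state):

    # Bracket a test script with banners and time it; "time" emits the
    # real/user/sys triplet that appears at the end of each suite.
    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
    }

    run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh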
00:04:07.668 16:13:27 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:07.668 16:13:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:07.668 16:13:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:07.668 16:13:27 -- common/autotest_common.sh@10 -- # set +x
00:04:07.668 ************************************
00:04:07.668 START TEST hugepages
00:04:07.668 ************************************
00:04:07.668 16:13:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh
00:04:07.668 * Looking for test storage...
00:04:07.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup
00:04:07.668 16:13:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]]
00:04:07.668 16:13:27 -- common/autotest_common.sh@1690 -- # lcov --version
00:04:07.668 16:13:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}'
00:04:07.668 16:13:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2
00:04:07.668 16:13:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2
00:04:07.668 16:13:27 -- scripts/common.sh@332 -- # local ver1 ver1_l
00:04:07.668 16:13:27 -- scripts/common.sh@333 -- # local ver2 ver2_l
00:04:07.668 16:13:27 -- scripts/common.sh@335 -- # IFS=.-:
00:04:07.668 16:13:27 -- scripts/common.sh@335 -- # read -ra ver1
00:04:07.668 16:13:27 -- scripts/common.sh@336 -- # IFS=.-:
00:04:07.668 16:13:27 -- scripts/common.sh@336 -- # read -ra ver2
00:04:07.668 16:13:27 -- scripts/common.sh@337 -- # local 'op=<'
00:04:07.668 16:13:27 -- scripts/common.sh@339 -- # ver1_l=2
00:04:07.668 16:13:27 -- scripts/common.sh@340 -- # ver2_l=1
00:04:07.668 16:13:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v
00:04:07.668 16:13:27 -- scripts/common.sh@343 -- # case "$op" in
00:04:07.668 16:13:27 -- scripts/common.sh@344 -- # : 1
00:04:07.668 16:13:27 -- scripts/common.sh@363 -- # (( v = 0 ))
00:04:07.668 16:13:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:07.668 16:13:27 -- scripts/common.sh@364 -- # decimal 1
00:04:07.668 16:13:27 -- scripts/common.sh@352 -- # local d=1
00:04:07.668 16:13:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:07.668 16:13:27 -- scripts/common.sh@354 -- # echo 1
00:04:07.668 16:13:27 -- scripts/common.sh@364 -- # ver1[v]=1
00:04:07.668 16:13:27 -- scripts/common.sh@365 -- # decimal 2
00:04:07.668 16:13:27 -- scripts/common.sh@352 -- # local d=2
00:04:07.668 16:13:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:07.668 16:13:27 -- scripts/common.sh@354 -- # echo 2
00:04:07.668 16:13:27 -- scripts/common.sh@365 -- # ver2[v]=2
00:04:07.668 16:13:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] ))
00:04:07.668 16:13:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] ))
00:04:07.668 16:13:27 -- scripts/common.sh@367 -- # return 0
00:04:07.668 16:13:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:07.668 16:13:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS=
00:04:07.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.668 --rc genhtml_branch_coverage=1
00:04:07.668 --rc genhtml_function_coverage=1
00:04:07.668 --rc genhtml_legend=1
00:04:07.668 --rc geninfo_all_blocks=1
00:04:07.668 --rc geninfo_unexecuted_blocks=1
00:04:07.668 
00:04:07.668 '
00:04:07.668 16:13:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS='
00:04:07.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.668 --rc genhtml_branch_coverage=1
00:04:07.668 --rc genhtml_function_coverage=1
00:04:07.668 --rc genhtml_legend=1
00:04:07.668 --rc geninfo_all_blocks=1
00:04:07.668 --rc geninfo_unexecuted_blocks=1
00:04:07.668 
00:04:07.668 '
00:04:07.668 16:13:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov
00:04:07.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.668 --rc genhtml_branch_coverage=1
00:04:07.668 --rc genhtml_function_coverage=1
00:04:07.668 --rc genhtml_legend=1
00:04:07.668 --rc geninfo_all_blocks=1
00:04:07.668 --rc geninfo_unexecuted_blocks=1
00:04:07.668 
00:04:07.668 '
00:04:07.668 16:13:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov
00:04:07.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:07.668 --rc genhtml_branch_coverage=1
00:04:07.668 --rc genhtml_function_coverage=1
00:04:07.668 --rc genhtml_legend=1
00:04:07.668 --rc geninfo_all_blocks=1
00:04:07.668 --rc geninfo_unexecuted_blocks=1
00:04:07.668 
00:04:07.668 '
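The scripts/common.sh trace above is the generic version comparator: "lt 1.15 2" splits both versions on '.', '-' and ':' and compares them numerically component by component, treating a missing component as 0. A minimal standalone rendering of that logic (simplified from the trace; the real cmp_versions also handles '>', '==' and validates each component via decimal()):

    version_lt() {                          # usage: version_lt 1.15 2  -> returns 0 if $1 < $2
        local -a ver1 ver2
        local v max
        IFS=.-: read -ra ver1 <<< "$1"      # split on '.', '-', ':'
        IFS=.-: read -ra ver2 <<< "$2"
        max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                            # equal is not less-than
    }

Here the installed lcov (1.15) is below 2, so the comparison takes the return 0 path and the older --rc lcov_*_coverage option spelling is what ends up exported in LCOV_OPTS.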
00:04:07.668 16:13:27 -- setup/hugepages.sh@10 -- # nodes_sys=()
00:04:07.668 16:13:27 -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:04:07.668 16:13:27 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:04:07.668 16:13:27 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:04:07.668 16:13:27 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:04:07.668 16:13:27 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:04:07.668 16:13:27 -- setup/common.sh@17 -- # local get=Hugepagesize
00:04:07.668 16:13:27 -- setup/common.sh@18 -- # local node=
00:04:07.668 16:13:27 -- setup/common.sh@19 -- # local var val
00:04:07.668 16:13:27 -- setup/common.sh@20 -- # local mem_f mem
00:04:07.668 16:13:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:07.668 16:13:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:07.668 16:13:27 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:07.668 16:13:27 -- setup/common.sh@28 -- # mapfile -t mem
00:04:07.668 16:13:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:07.668 16:13:27 -- setup/common.sh@31 -- # IFS=': '
00:04:07.668 16:13:27 -- setup/common.sh@31 -- # read -r var val _
00:04:07.669 16:13:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 5792072 kB' 'MemAvailable: 7347652 kB' 'Buffers: 3704 kB' 'Cached: 1767568 kB' 'SwapCached: 0 kB' 'Active: 465620 kB' 'Inactive: 1421516 kB' 'Active(anon): 126396 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421516 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 264 kB' 'Writeback: 0 kB' 'AnonPages: 117776 kB' 'Mapped: 51116 kB' 'Shmem: 10532 kB' 'KReclaimable: 63536 kB' 'Slab: 161900 kB' 'SReclaimable: 63536 kB' 'SUnreclaim: 98364 kB' 'KernelStack: 6640 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12410000 kB' 'Committed_AS: 309372 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55560 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
00:04:07.669 16:13:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.669 16:13:27 -- setup/common.sh@32 -- # continue
00:04:07.669 16:13:27 -- setup/common.sh@31 -- # IFS=': '
00:04:07.669 16:13:27 -- setup/common.sh@31 -- # read -r var val _
[... further xtrace lines: the @31 IFS=': '/read and @32 compare/continue cycle repeats for each remaining key of the snapshot above, until Hugepagesize matches ...]
00:04:07.670 16:13:27 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:07.670 16:13:27 -- setup/common.sh@33 -- # echo 2048
00:04:07.670 16:13:27 -- setup/common.sh@33 -- # return 0
00:04:07.670 16:13:27 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:07.670 16:13:27 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:07.670 16:13:27 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
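get_meminfo, traced at setup/common.sh@16-@33 above, is a small /proc/meminfo field extractor: snapshot the file, then read "key: value" pairs with ':' and space both acting as field separators until the requested key matches. A condensed sketch of the same idea (the real helper also snapshots through mapfile and strips a leading "Node <n> " prefix so the identical loop can parse per-node meminfo files; that part is omitted here):

    get_meminfo_sketch() {                    # usage: get_meminfo_sketch Hugepagesize
        local get=$1 var val _
        while IFS=': ' read -r var val _; do  # 'Hugepagesize:    2048 kB' -> var=Hugepagesize val=2048
            if [[ $var == "$get" ]]; then
                echo "$val"                   # numeric part only; the units column is dropped
                return 0
            fi
        done < /proc/meminfo
        return 1                              # key not present
    }

hugepages.sh then derives its knobs from the result: the 2048 kB default page size maps straight onto the /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages control file recorded above.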
00:04:07.670 16:13:27 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:07.670 16:13:27 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:07.670 16:13:27 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:07.670 16:13:27 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:07.670 16:13:27 -- setup/hugepages.sh@207 -- # get_nodes
00:04:07.670 16:13:27 -- setup/hugepages.sh@27 -- # local node
00:04:07.670 16:13:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:07.670 16:13:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:07.670 16:13:27 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:07.670 16:13:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:07.670 16:13:27 -- setup/hugepages.sh@208 -- # clear_hp
00:04:07.670 16:13:27 -- setup/hugepages.sh@37 -- # local node hp
00:04:07.670 16:13:27 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:07.670 16:13:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:07.670 16:13:27 -- setup/hugepages.sh@41 -- # echo 0
00:04:07.670 16:13:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:07.670 16:13:27 -- setup/hugepages.sh@41 -- # echo 0
00:04:07.670 16:13:27 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:07.670 16:13:27 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:07.670 16:13:27 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:07.670 16:13:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:07.670 16:13:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:07.670 16:13:27 -- common/autotest_common.sh@10 -- # set +x
00:04:07.670 ************************************
00:04:07.670 START TEST default_setup
00:04:07.670 ************************************
00:04:07.670 16:13:27 -- common/autotest_common.sh@1114 -- # default_setup
00:04:07.670 16:13:27 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:07.670 16:13:27 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:07.670 16:13:27 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:07.670 16:13:27 -- setup/hugepages.sh@51 -- # shift
00:04:07.670 16:13:27 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:07.670 16:13:27 -- setup/hugepages.sh@52 -- # local node_ids
00:04:07.670 16:13:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:07.670 16:13:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:07.670 16:13:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:07.670 16:13:27 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:07.670 16:13:27 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:07.670 16:13:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:07.670 16:13:27 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:07.670 16:13:27 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:07.670 16:13:27 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:07.670 16:13:27 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:07.670 16:13:27 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:07.670 16:13:27 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:07.670 16:13:27 -- setup/hugepages.sh@73 -- # return 0
00:04:07.670 16:13:27 -- setup/hugepages.sh@137 -- # setup output
00:04:07.670 16:13:27 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:07.670 16:13:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
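Two of the traced steps are worth unpacking. clear_hp zeroes every per-node, per-size reservation so the test starts from a clean slate, and get_test_nr_hugepages turns the requested pool size into a page count: 2097152 kB divided by the 2048 kB page size gives the nr_hugepages=1024 seen above. A sketch under those assumptions (standard sysfs hugepage paths; the function name is illustrative):

    clear_hp_sketch() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                echo 0 > "$hp/nr_hugepages"   # drop existing reservations for every page size
            done
        done
    }

    size_kb=2097152                               # requested test pool (2 GiB)
    hugepagesize_kb=2048                          # from get_meminfo Hugepagesize
    nr_hugepages=$(( size_kb / hugepagesize_kb )) # -> 1024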
00:04:08.636 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:08.636 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:04:08.636 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
00:04:08.636 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic
00:04:08.901 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic
00:04:08.901 16:13:28 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:08.901 16:13:28 -- setup/hugepages.sh@89 -- # local node
00:04:08.901 16:13:28 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:08.901 16:13:28 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:08.901 16:13:28 -- setup/hugepages.sh@92 -- # local surp
00:04:08.901 16:13:28 -- setup/hugepages.sh@93 -- # local resv
00:04:08.901 16:13:28 -- setup/hugepages.sh@94 -- # local anon
00:04:08.901 16:13:28 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:08.901 16:13:28 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:08.901 16:13:28 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:08.901 16:13:28 -- setup/common.sh@18 -- # local node=
00:04:08.901 16:13:28 -- setup/common.sh@19 -- # local var val
00:04:08.901 16:13:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:08.901 16:13:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.901 16:13:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.901 16:13:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.901 16:13:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.901 16:13:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.901 16:13:28 -- setup/common.sh@31 -- # IFS=': '
00:04:08.901 16:13:28 -- setup/common.sh@31 -- # read -r var val _
00:04:08.901 16:13:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7860292 kB' 'MemAvailable: 9415704 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 467132 kB' 'Inactive: 1421524 kB' 'Active(anon): 127908 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118808 kB' 'Mapped: 50808 kB' 'Shmem: 10492 kB' 'KReclaimable: 63184 kB' 'Slab: 161476 kB' 'SReclaimable: 63184 kB' 'SUnreclaim: 98292 kB' 'KernelStack: 6560 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55624 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
[... further xtrace lines: the same @31 IFS=': '/read and @32 compare/continue cycle, scanning each key of the snapshot above until AnonHugePages matches ...]
00:04:08.902 16:13:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:08.902 16:13:28 -- setup/common.sh@33 -- # echo 0
00:04:08.902 16:13:28 -- setup/common.sh@33 -- # return 0
00:04:08.902 16:13:28 -- setup/hugepages.sh@97 -- # anon=0
00:04:08.902 16:13:28 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:08.902 16:13:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:08.902 16:13:28 -- setup/common.sh@18 -- # local node=
00:04:08.902 16:13:28 -- setup/common.sh@19 -- # local var val
00:04:08.902 16:13:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:08.902 16:13:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.902 16:13:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.902 16:13:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.902 16:13:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.903 16:13:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.903 16:13:28 -- setup/common.sh@31 -- # IFS=': '
00:04:08.903 16:13:28 -- setup/common.sh@31 -- # read -r var val _
00:04:08.903 16:13:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7860752 kB' 'MemAvailable: 9416164 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 466916 kB' 'Inactive: 1421524 kB' 'Active(anon): 127692 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118800 kB' 'Mapped: 50800 kB' 'Shmem: 10492 kB' 'KReclaimable: 63184 kB' 'Slab: 161460 kB' 'SReclaimable: 63184 kB' 'SUnreclaim: 98276 kB' 'KernelStack: 6496 kB' 'PageTables: 3864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
[... further xtrace lines: per-key compare/continue cycle, scanning until HugePages_Surp matches ...]
00:04:08.904 16:13:28 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:08.904 16:13:28 -- setup/common.sh@33 -- # echo 0
00:04:08.904 16:13:28 -- setup/common.sh@33 -- # return 0
00:04:08.904 16:13:28 -- setup/hugepages.sh@99 -- # surp=0
00:04:08.904 16:13:28 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:08.904 16:13:28 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:08.904 16:13:28 -- setup/common.sh@18 -- # local node=
00:04:08.904 16:13:28 -- setup/common.sh@19 -- # local var val
00:04:08.904 16:13:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:08.904 16:13:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.904 16:13:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.904 16:13:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.904 16:13:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.904 16:13:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.904 16:13:28 -- setup/common.sh@31 -- # IFS=': '
00:04:08.904 16:13:28 -- setup/common.sh@31 -- # read -r var val _
00:04:08.904 16:13:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7860752 kB' 'MemAvailable: 9416164 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 466884 kB' 'Inactive: 1421524 kB' 'Active(anon): 127660 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118760 kB' 'Mapped: 50700 kB' 'Shmem: 10492 kB' 'KReclaimable: 63184 kB' 'Slab: 161468 kB' 'SReclaimable: 63184 kB' 'SUnreclaim: 98284 kB' 'KernelStack: 6496 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
[... further xtrace lines: per-key compare/continue cycle scanning for HugePages_Rsvd; the excerpt ends mid-scan, after the WritebackTmp comparison ...]
setup/common.sh@32 -- # continue 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # continue 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # continue 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # continue 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # continue 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # continue 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # continue 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # continue 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # continue 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # continue 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # continue 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # continue 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # continue 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # continue 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # continue 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # continue 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # continue 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # continue 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.905 16:13:28 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:08.905 16:13:28 -- setup/common.sh@33 -- # echo 0 00:04:08.905 16:13:28 -- setup/common.sh@33 -- # return 0 00:04:08.905 16:13:28 -- setup/hugepages.sh@100 -- # resv=0 00:04:08.905 16:13:28 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:08.905 nr_hugepages=1024 00:04:08.905 16:13:28 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:08.905 resv_hugepages=0 00:04:08.905 16:13:28 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:08.905 surplus_hugepages=0 00:04:08.905 16:13:28 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:08.905 anon_hugepages=0 00:04:08.905 16:13:28 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:08.905 16:13:28 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:08.905 16:13:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:08.905 16:13:28 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:08.905 16:13:28 -- setup/common.sh@18 -- # local node= 00:04:08.905 16:13:28 -- setup/common.sh@19 -- # local var val 00:04:08.905 16:13:28 -- setup/common.sh@20 -- # local mem_f mem 00:04:08.905 16:13:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:08.905 16:13:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:08.905 16:13:28 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:08.905 16:13:28 -- setup/common.sh@28 -- # mapfile -t mem 00:04:08.905 16:13:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': ' 00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _ 00:04:08.906 16:13:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7860752 kB' 'MemAvailable: 9416164 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 466908 kB' 'Inactive: 1421524 kB' 'Active(anon): 127684 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
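The repeated pattern in the trace above is setup/common.sh's get_meminfo: it snapshots /proc/meminfo (or a per-node meminfo file) into an array, strips the "Node N " prefix that the per-node files carry, then walks the fields until the requested key matches and echoes its value. A minimal self-contained sketch of that technique, reconstructed from the trace with an illustrative function name (not the actual SPDK helper):

#!/usr/bin/env bash
# Look up one field of /proc/meminfo, or of a NUMA node's meminfo when a
# node number is given; mirrors the get_meminfo calls traced above.
shopt -s extglob
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local var val rest line mem_f=/proc/meminfo
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node 0 "; strip it.
    mem=("${mem[@]#Node +([0-9]) }")
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val rest <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}
get_meminfo_sketch HugePages_Total     # 1024 on the VM in this log
get_meminfo_sketch HugePages_Surp 0    # node0's surplus count, 0 here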
00:04:08.905 16:13:28 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:08.905 16:13:28 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:08.905 16:13:28 -- setup/common.sh@18 -- # local node=
00:04:08.905 16:13:28 -- setup/common.sh@19 -- # local var val
00:04:08.905 16:13:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:08.905 16:13:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.905 16:13:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:08.905 16:13:28 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:08.905 16:13:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.905 16:13:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.905 16:13:28 -- setup/common.sh@31 -- # IFS=': '
00:04:08.905 16:13:28 -- setup/common.sh@31 -- # read -r var val _
00:04:08.906 16:13:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7860752 kB' 'MemAvailable: 9416164 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 466908 kB' 'Inactive: 1421524 kB' 'Active(anon): 127684 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118736 kB' 'Mapped: 50700 kB' 'Shmem: 10492 kB' 'KReclaimable: 63184 kB' 'Slab: 161468 kB' 'SReclaimable: 63184 kB' 'SUnreclaim: 98284 kB' 'KernelStack: 6480 kB' 'PageTables: 3804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
00:04:08.906 [... field-by-field scan for HugePages_Total elided: every field from MemTotal through Unaccepted is read and skipped until HugePages_Total matches ...]
00:04:08.907 16:13:28 -- setup/common.sh@33 -- # echo 1024
00:04:08.907 16:13:28 -- setup/common.sh@33 -- # return 0
00:04:08.907 16:13:28 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
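The hugepages.sh@107/@110 guards assert the bookkeeping identity behind these lookups: the kernel's HugePages_Total must equal the pool the test requested plus any surplus and reserved pages. A standalone version of that check (nr_requested stands in for the test's nr_hugepages):

# HugePages_Total == requested + surplus + reserved, as hugepages.sh asserts.
nr_requested=1024
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
if (( total == nr_requested + surp + resv )); then
    echo "hugepage pool consistent: $total pages"
else
    echo "pool mismatch: total=$total surp=$surp resv=$resv" >&2
fi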
00:04:08.907 16:13:28 -- setup/hugepages.sh@112 -- # get_nodes
00:04:08.907 16:13:28 -- setup/hugepages.sh@27 -- # local node
00:04:08.907 16:13:28 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:08.907 16:13:28 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:08.907 16:13:28 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:08.907 16:13:28 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:08.907 16:13:28 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:08.907 16:13:28 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:08.907 16:13:28 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:08.907 16:13:28 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:08.907 16:13:28 -- setup/common.sh@18 -- # local node=0
00:04:08.907 16:13:28 -- setup/common.sh@19 -- # local var val
00:04:08.907 16:13:28 -- setup/common.sh@20 -- # local mem_f mem
00:04:08.907 16:13:28 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:08.907 16:13:28 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:08.907 16:13:28 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:08.907 16:13:28 -- setup/common.sh@28 -- # mapfile -t mem
00:04:08.907 16:13:28 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:08.907 16:13:28 -- setup/common.sh@31 -- # IFS=': '
00:04:08.907 16:13:28 -- setup/common.sh@31 -- # read -r var val _
00:04:08.907 16:13:28 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7861216 kB' 'MemUsed: 4375880 kB' 'SwapCached: 0 kB' 'Active: 466868 kB' 'Inactive: 1421524 kB' 'Active(anon): 127644 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421524 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1771264 kB' 'Mapped: 50700 kB' 'AnonPages: 118712 kB' 'Shmem: 10492 kB' 'KernelStack: 6532 kB' 'PageTables: 3756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63184 kB' 'Slab: 161468 kB' 'SReclaimable: 63184 kB' 'SUnreclaim: 98284 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:08.907 [... field-by-field scan of node0's meminfo for HugePages_Surp elided ...]
00:04:08.908 16:13:28 -- setup/common.sh@33 -- # echo 0
00:04:08.908 16:13:28 -- setup/common.sh@33 -- # return 0
00:04:08.908 16:13:28 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:08.908 16:13:28 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:08.908 16:13:28 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:08.908 16:13:28 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:08.908 node0=1024 expecting 1024
00:04:08.908 ************************************
00:04:08.908 END TEST default_setup
00:04:08.908 ************************************
00:04:08.908 16:13:28 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:08.908 16:13:28 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:08.908
00:04:08.908 real 0m1.279s
00:04:08.908 user 0m0.502s
00:04:08.908 sys 0m0.621s
00:04:08.908 16:13:28 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:04:08.908 16:13:28 -- common/autotest_common.sh@10 -- # set +x
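get_nodes above discovers NUMA topology by globbing /sys/devices/system/node/node+([0-9]) (an extglob pattern, exactly as traced) and records each node's hugepage total; the per-node assertion then compares the result against the expected pool. A minimal version of that enumeration, with illustrative variable names:

# Enumerate NUMA nodes and read each one's HugePages_Total, as get_nodes does.
shopt -s extglob nullglob
declare -A node_pages
for node_dir in /sys/devices/system/node/node+([0-9]); do
    node=${node_dir##*node}
    # Per-node meminfo lines read "Node 0 HugePages_Total: 1024"; take the last field.
    node_pages[$node]=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
done
for node in "${!node_pages[@]}"; do
    echo "node$node=${node_pages[$node]}"   # prints node0=1024 on this VM
done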
00:04:08.908 16:13:28 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:08.908 16:13:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:08.908 16:13:28 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:08.908 16:13:28 -- common/autotest_common.sh@10 -- # set +x
00:04:08.908 ************************************
00:04:08.908 START TEST per_node_1G_alloc
00:04:08.908 ************************************
00:04:08.908 16:13:28 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc
00:04:08.908 16:13:28 -- setup/hugepages.sh@143 -- # local IFS=,
00:04:08.908 16:13:28 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:04:08.908 16:13:28 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:08.908 16:13:28 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:08.908 16:13:28 -- setup/hugepages.sh@51 -- # shift
00:04:08.908 16:13:28 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:08.908 16:13:28 -- setup/hugepages.sh@52 -- # local node_ids
00:04:08.908 16:13:28 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:08.908 16:13:28 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:08.908 16:13:28 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:08.908 16:13:28 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:08.908 16:13:28 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:08.908 16:13:28 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:08.908 16:13:28 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:08.908 16:13:28 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:08.908 16:13:28 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:08.908 16:13:28 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:08.908 16:13:28 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:08.908 16:13:28 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:08.908 16:13:28 -- setup/hugepages.sh@73 -- # return 0
00:04:08.908 16:13:28 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:08.908 16:13:28 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:04:08.908 16:13:28 -- setup/hugepages.sh@146 -- # setup output
00:04:08.908 16:13:28 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:08.908 16:13:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:09.483 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:09.483 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:09.483 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:09.483 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:09.483 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
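The per_node_1G_alloc prologue above converts the requested allocation into a page count: the 1048576 argument reads as a size in kB (1 GiB), and dividing by the 2048 kB Hugepagesize yields the traced nr_hugepages=512, all assigned to node 0 before setup.sh rebinds devices. The arithmetic spelled out (a sketch; the kB interpretation of the argument is an assumption here, and the real helper lives in setup/hugepages.sh):

# size (kB) / Hugepagesize (kB) -> number of hugepages to allocate.
size_kb=1048576                                                   # 1 GiB
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)    # 2048 on this VM
(( size_kb >= hugepage_kb )) || { echo "size below one hugepage" >&2; exit 1; }
nr_hugepages=$(( size_kb / hugepage_kb ))                         # 1048576 / 2048 = 512
echo "NRHUGE=$nr_hugepages HUGENODE=0"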
00:04:09.483 16:13:29 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:04:09.483 16:13:29 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:09.483 16:13:29 -- setup/hugepages.sh@89 -- # local node
00:04:09.483 16:13:29 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:09.483 16:13:29 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:09.483 16:13:29 -- setup/hugepages.sh@92 -- # local surp
00:04:09.483 16:13:29 -- setup/hugepages.sh@93 -- # local resv
00:04:09.483 16:13:29 -- setup/hugepages.sh@94 -- # local anon
00:04:09.483 16:13:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:09.483 16:13:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:09.483 16:13:29 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:09.483 16:13:29 -- setup/common.sh@18 -- # local node=
00:04:09.483 16:13:29 -- setup/common.sh@19 -- # local var val
00:04:09.483 16:13:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.483 16:13:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.483 16:13:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.483 16:13:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.483 16:13:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.483 16:13:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.483 16:13:29 -- setup/common.sh@31 -- # IFS=': '
00:04:09.483 16:13:29 -- setup/common.sh@31 -- # read -r var val _
00:04:09.483 16:13:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8924260 kB' 'MemAvailable: 10479696 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 467068 kB' 'Inactive: 1421548 kB' 'Active(anon): 127844 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118888 kB' 'Mapped: 50872 kB' 'Shmem: 10492 kB' 'KReclaimable: 63184 kB' 'Slab: 161520 kB' 'SReclaimable: 63184 kB' 'SUnreclaim: 98336 kB' 'KernelStack: 6576 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
00:04:09.483 [... field-by-field scan for AnonHugePages elided: every field from MemTotal through HardwareCorrupted is read and skipped until AnonHugePages matches ...]
00:04:09.484 16:13:29 -- setup/common.sh@33 -- # echo 0
00:04:09.484 16:13:29 -- setup/common.sh@33 -- # return 0
00:04:09.484 16:13:29 -- setup/hugepages.sh@97 -- # anon=0
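Before counting anonymous hugepages, verify_nr_hugepages checks the transparent-hugepage mode: the traced [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test is the sysfs mode string (brackets mark the active setting) matched against a "[never]" pattern. The same check standalone:

# Skip anon-hugepage accounting if THP is hard-disabled, as hugepages.sh@96 checks.
thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp_mode != *"[never]"* ]]; then
    echo "THP active mode: $thp_mode"
else
    echo "THP disabled; AnonHugePages will stay 0" >&2
fi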
00:04:09.484 16:13:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:09.484 16:13:29 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:09.484 16:13:29 -- setup/common.sh@18 -- # local node=
00:04:09.484 16:13:29 -- setup/common.sh@19 -- # local var val
00:04:09.484 16:13:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.484 16:13:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.484 16:13:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.484 16:13:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.484 16:13:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.484 16:13:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.484 16:13:29 -- setup/common.sh@31 -- # IFS=': '
00:04:09.484 16:13:29 -- setup/common.sh@31 -- # read -r var val _
00:04:09.484 16:13:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8924008 kB' 'MemAvailable: 10479444 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 466936 kB' 'Inactive: 1421548 kB' 'Active(anon): 127712 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118844 kB' 'Mapped: 50700 kB' 'Shmem: 10492 kB' 'KReclaimable: 63184 kB' 'Slab: 161520 kB' 'SReclaimable: 63184 kB' 'SUnreclaim: 98336 kB' 'KernelStack: 6528 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
00:04:09.486 16:13:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.486 16:13:29 -- setup/common.sh@33 -- # echo 0
00:04:09.486 16:13:29 -- setup/common.sh@33 -- # return 0
00:04:09.486 16:13:29 -- setup/hugepages.sh@99 -- # surp=0
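Each get_meminfo call above re-reads the whole file to fetch a single counter. For ad-hoc inspection outside the harness, the same hugepage counters can be pulled in one pass; this awk line is an equivalent convenience, not something the SPDK scripts run:

  # Prints HugePages_Total=512, HugePages_Free=512, HugePages_Rsvd=0, HugePages_Surp=0
  # (plus Hugepagesize/Hugetlb) for the host in this trace.
  awk '/^HugePages_/ { sub(/:$/, "", $1); print $1 "=" $2 }' /proc/meminfo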
00:04:09.486 16:13:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:09.486 16:13:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:09.486 16:13:29 -- setup/common.sh@18 -- # local node=
00:04:09.486 16:13:29 -- setup/common.sh@19 -- # local var val
00:04:09.486 16:13:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.486 16:13:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.486 16:13:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.486 16:13:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.486 16:13:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.486 16:13:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.486 16:13:29 -- setup/common.sh@31 -- # IFS=': '
00:04:09.486 16:13:29 -- setup/common.sh@31 -- # read -r var val _
00:04:09.486 16:13:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8923756 kB' 'MemAvailable: 10479192 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 466944 kB' 'Inactive: 1421548 kB' 'Active(anon): 127720 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118848 kB' 'Mapped: 50700 kB' 'Shmem: 10492 kB' 'KReclaimable: 63184 kB' 'Slab: 161520 kB' 'SReclaimable: 63184 kB' 'SUnreclaim: 98336 kB' 'KernelStack: 6528 kB' 'PageTables: 3952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
00:04:09.487 16:13:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:09.487 16:13:29 -- setup/common.sh@33 -- # echo 0
00:04:09.487 16:13:29 -- setup/common.sh@33 -- # return 0
00:04:09.487 16:13:29 -- setup/hugepages.sh@100 -- # resv=0
00:04:09.487 nr_hugepages=512
00:04:09.487 resv_hugepages=0
00:04:09.487 surplus_hugepages=0
00:04:09.487 16:13:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512
00:04:09.487 16:13:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:09.487 16:13:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:09.487 anon_hugepages=0
00:04:09.487 16:13:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:09.487 16:13:29 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:09.487 16:13:29 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
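The two arithmetic checks above assert that the kernel's hugepage pool matches what the test configured. Spelled out with this run's numbers, using the get_meminfo sketch from earlier (the wrapper below is illustrative, not SPDK code):

  nr_hugepages=512   # pool size this test configured
  surp=0             # HugePages_Surp: surplus pages allocated beyond the static pool
  resv=0             # HugePages_Rsvd: pages promised to mappings but not yet faulted in
  total=$(get_meminfo HugePages_Total)   # 512 in this run
  (( total == nr_hugepages + surp + resv )) || echo "unexpected pool size: $total"
  (( total == nr_hugepages )) && echo "no surplus or reserved pages outstanding"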
00:04:09.487 16:13:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:09.487 16:13:29 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:09.487 16:13:29 -- setup/common.sh@18 -- # local node=
00:04:09.487 16:13:29 -- setup/common.sh@19 -- # local var val
00:04:09.487 16:13:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.487 16:13:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.487 16:13:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.487 16:13:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.487 16:13:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.487 16:13:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.487 16:13:29 -- setup/common.sh@31 -- # IFS=': '
00:04:09.487 16:13:29 -- setup/common.sh@31 -- # read -r var val _
00:04:09.487 16:13:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8923756 kB' 'MemAvailable: 10479192 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 467112 kB' 'Inactive: 1421548 kB' 'Active(anon): 127888 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118968 kB' 'Mapped: 50700 kB' 'Shmem: 10492 kB' 'KReclaimable: 63184 kB' 'Slab: 161516 kB' 'SReclaimable: 63184 kB' 'SUnreclaim: 98332 kB' 'KernelStack: 6496 kB' 'PageTables: 3852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
00:04:09.489 16:13:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.489 16:13:29 -- setup/common.sh@33 -- # echo 512
00:04:09.489 16:13:29 -- setup/common.sh@33 -- # return 0
00:04:09.489 16:13:29 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:04:09.489 16:13:29 -- setup/hugepages.sh@112 -- # get_nodes
00:04:09.489 16:13:29 -- setup/hugepages.sh@27 -- # local node
00:04:09.489 16:13:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:09.489 16:13:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:09.489 16:13:29 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:09.489 16:13:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:09.489 16:13:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:09.489 16:13:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
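get_nodes, traced just above, discovers the NUMA layout by globbing sysfs; on this single-node VM it yields no_nodes=1, so all 512 expected pages are attributed to node 0. A standalone rendering of that glob (extglob assumed, as in the earlier sketch; the 512 mirrors this run's expectation):

  shopt -s extglob nullglob
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=512   # expected per-node HugePages_Total
  done
  echo "no_nodes=${#nodes_sys[@]}"    # 1 on this VM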
00:04:09.489 16:13:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:09.489 16:13:29 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:09.489 16:13:29 -- setup/common.sh@18 -- # local node=0
00:04:09.489 16:13:29 -- setup/common.sh@19 -- # local var val
00:04:09.489 16:13:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:09.489 16:13:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.489 16:13:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:09.489 16:13:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:09.489 16:13:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.489 16:13:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.489 16:13:29 -- setup/common.sh@31 -- # IFS=': '
00:04:09.489 16:13:29 -- setup/common.sh@31 -- # read -r var val _
00:04:09.489 16:13:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8923756 kB' 'MemUsed: 3313340 kB' 'SwapCached: 0 kB' 'Active: 466684 kB' 'Inactive: 1421548 kB' 'Active(anon): 127460 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1771264 kB' 'Mapped: 50700 kB' 'AnonPages: 118544 kB' 'Shmem: 10492 kB' 'KernelStack: 6496 kB' 'PageTables: 3856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63184 kB' 'Slab: 161516 kB' 'SReclaimable: 63184 kB' 'SUnreclaim: 98332 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
setup/common.sh@31 -- # IFS=': ' 00:04:09.490 16:13:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.490 16:13:29 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.490 16:13:29 -- setup/common.sh@32 -- # continue 00:04:09.490 16:13:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.490 16:13:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.490 16:13:29 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.490 16:13:29 -- setup/common.sh@32 -- # continue 00:04:09.490 16:13:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.490 16:13:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.490 16:13:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.490 16:13:29 -- setup/common.sh@32 -- # continue 00:04:09.490 16:13:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.490 16:13:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.490 16:13:29 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.490 16:13:29 -- setup/common.sh@32 -- # continue 00:04:09.490 16:13:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:09.490 16:13:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:09.490 16:13:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.490 16:13:29 -- setup/common.sh@33 -- # echo 0 00:04:09.490 16:13:29 -- setup/common.sh@33 -- # return 0 00:04:09.490 node0=512 expecting 512 00:04:09.490 ************************************ 00:04:09.490 END TEST per_node_1G_alloc 00:04:09.490 ************************************ 00:04:09.490 16:13:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.490 16:13:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.490 16:13:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.490 16:13:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.490 16:13:29 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:09.490 16:13:29 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:09.490 00:04:09.490 real 0m0.593s 00:04:09.490 user 0m0.247s 00:04:09.490 sys 0m0.358s 00:04:09.490 16:13:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:09.490 16:13:29 -- common/autotest_common.sh@10 -- # set +x 00:04:09.751 16:13:29 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:09.751 16:13:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:09.751 16:13:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:09.751 16:13:29 -- common/autotest_common.sh@10 -- # set +x 00:04:09.751 ************************************ 00:04:09.751 START TEST even_2G_alloc 00:04:09.751 ************************************ 00:04:09.751 16:13:29 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:04:09.751 16:13:29 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:09.751 16:13:29 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:09.751 16:13:29 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:09.751 16:13:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:09.751 16:13:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:09.751 16:13:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:09.751 16:13:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:09.751 16:13:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:09.751 16:13:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:09.751 
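The xtrace above shows even_2G_alloc sizing its pool: get_test_nr_hugepages 2097152 turns a 2 GiB request (expressed in kB) into nr_hugepages=1024. A minimal Bash sketch of that arithmetic follows, assuming a size argument in kB and the 2048 kB hugepage size reported by the meminfo snapshots later in this log; the real helper in setup/hugepages.sh also handles per-node user overrides, which this run leaves empty.

#!/usr/bin/env bash
# Sketch, not the verbatim SPDK helper: derive the hugepage count for a
# requested pool size. Assumes size is given in kB and 2048 kB hugepages
# (matches "Hugepagesize: 2048 kB" in the snapshots below).
default_hugepages=2048  # kB

get_test_nr_hugepages() {
  local size=$1                                  # e.g. 2097152 kB == 2 GiB
  (( size >= default_hugepages )) || return 1    # refuse sub-hugepage sizes
  nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
}

get_test_nr_hugepages 2097152
echo "nr_hugepages=$nr_hugepages"                # -> nr_hugepages=1024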
00:04:09.751 16:13:29 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:09.751 16:13:29 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:09.751 16:13:29 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:09.751 16:13:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:09.751 16:13:29 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:09.751 16:13:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:09.751 16:13:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:04:09.751 16:13:29 -- setup/hugepages.sh@83 -- # : 0
00:04:09.751 16:13:29 -- setup/hugepages.sh@84 -- # : 0
00:04:09.751 16:13:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:09.751 16:13:29 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:09.751 16:13:29 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:09.752 16:13:29 -- setup/hugepages.sh@153 -- # setup output
00:04:09.752 16:13:29 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:09.752 16:13:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:10.016 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:10.016 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:10.016 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:10.016 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:10.016 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:10.016 16:13:29 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:10.016 16:13:29 -- setup/hugepages.sh@89 -- # local node
00:04:10.016 16:13:29 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:10.016 16:13:29 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:10.016 16:13:29 -- setup/hugepages.sh@92 -- # local surp
00:04:10.016 16:13:29 -- setup/hugepages.sh@93 -- # local resv
00:04:10.016 16:13:29 -- setup/hugepages.sh@94 -- # local anon
00:04:10.016 16:13:29 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:10.017 16:13:29 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:10.017 16:13:29 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:10.017 16:13:29 -- setup/common.sh@18 -- # local node=
00:04:10.017 16:13:29 -- setup/common.sh@19 -- # local var val
00:04:10.017 16:13:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:10.017 16:13:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.017 16:13:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.017 16:13:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.017 16:13:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.017 16:13:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.017 16:13:29 -- setup/common.sh@31 -- # IFS=': '
00:04:10.017 16:13:29 -- setup/common.sh@31 -- # read -r var val _
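Every meminfo lookup traced in this log follows the same get_meminfo pattern from setup/common.sh: snapshot a meminfo file into an array, strip the "Node <n> " prefix that per-node meminfo files carry, then scan "Field: value" lines until the requested field matches. A simplified, self-contained reading of that pattern (a sketch, not the verbatim source; the real function drives the loop differently but the trace above shows the same steps):

#!/usr/bin/env bash
shopt -s extglob  # needed for the +([0-9]) pattern below

get_meminfo() {
  local get=$1 node=${2:-}
  local var val _
  local mem_f=/proc/meminfo
  # With a node argument, read the per-node file instead; the trace above
  # shows node= empty, so /proc/meminfo is used.
  [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
    && mem_f=/sys/devices/system/node/node$node/meminfo
  local -a mem
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")  # strip "Node 0 " prefixes; no-op for /proc/meminfo
  local line
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"   # "MemTotal: 12237096 kB" -> var, val
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done
  return 1
}

get_meminfo HugePages_Total   # prints 1024 on this VM
get_meminfo MemFree 0         # per-node variant, if node0 exposes meminfo

The field-by-field [[ ... ]] / continue churn that fills the rest of this log is simply the xtrace of that scan, one pair of lines per /proc/meminfo field until the match.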
00:04:10.017 16:13:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7869952 kB' 'MemAvailable: 9425388 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 467784 kB' 'Inactive: 1421548 kB' 'Active(anon): 128560 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119372 kB' 'Mapped: 51004 kB' 'Shmem: 10492 kB' 'KReclaimable: 63184 kB' 'Slab: 161492 kB' 'SReclaimable: 63184 kB' 'SUnreclaim: 98308 kB' 'KernelStack: 6560 kB' 'PageTables: 4052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55624 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
[... per-field xtrace elided (00:04:10.017-00:04:10.284): MemTotal through HardwareCorrupted each fail the AnonHugePages match and "continue" ...]
00:04:10.284 16:13:29 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.284 16:13:29 -- setup/common.sh@33 -- # echo 0
00:04:10.284 16:13:29 -- setup/common.sh@33 -- # return 0
00:04:10.284 16:13:29 -- setup/hugepages.sh@97 -- # anon=0
00:04:10.284 16:13:29 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:10.284 16:13:29 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.284 16:13:29 -- setup/common.sh@18 -- # local node=
00:04:10.284 16:13:29 -- setup/common.sh@19 -- # local var val
00:04:10.284 16:13:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:10.284 16:13:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.284 16:13:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.284 16:13:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.284 16:13:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.284 16:13:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.284 16:13:29 -- setup/common.sh@31 -- # IFS=': '
00:04:10.284 16:13:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7869952 kB' 'MemAvailable: 9425388 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 467140 kB' 'Inactive: 1421548 kB' 'Active(anon): 127916 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118972 kB' 'Mapped: 50700 kB' 'Shmem: 10492 kB' 'KReclaimable: 63184 kB' 'Slab: 161524 kB' 'SReclaimable: 63184 kB' 'SUnreclaim: 98340 kB' 'KernelStack: 6528 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
00:04:10.284 16:13:29 -- setup/common.sh@31 -- # read -r var val _
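The anon=0 assignment above is gated on the transparent hugepage mode: the hugepages.sh@96 check compares the string "always [madvise] never" against *\[\n\e\v\e\r\]*, i.e. AnonHugePages is only counted when THP is not pinned to "never". A sketch of that step as I read it, reusing the get_meminfo sketch above; the sysfs path is an assumption, since the trace only shows the already-expanded mode string:

# Sketch of hugepages.sh@96-97 (assumed source of the mode string:
# /sys/kernel/mm/transparent_hugepage/enabled, e.g. "always [madvise] never").
thp=/sys/kernel/mm/transparent_hugepage/enabled
anon=0
if [[ -r $thp && $(< "$thp") != *"[never]"* ]]; then
  # THP is not disabled, so anonymous hugepage usage is worth accounting for.
  anon=$(get_meminfo AnonHugePages)   # kB; 0 in the snapshot above
fi
echo "anon=$anon"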
[... per-field xtrace elided (00:04:10.284-00:04:10.285): MemTotal through HugePages_Rsvd each fail the HugePages_Surp match and "continue" ...]
00:04:10.285 16:13:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.285 16:13:29 -- setup/common.sh@33 -- # echo 0
00:04:10.285 16:13:29 -- setup/common.sh@33 -- # return 0
00:04:10.285 16:13:29 -- setup/hugepages.sh@99 -- # surp=0
00:04:10.285 16:13:29 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:10.285 16:13:29 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:10.285 16:13:29 -- setup/common.sh@18 -- # local node=
00:04:10.285 16:13:29 -- setup/common.sh@19 -- # local var val
00:04:10.285 16:13:29 -- setup/common.sh@20 -- # local mem_f mem
00:04:10.285 16:13:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.285 16:13:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.285 16:13:29 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.285 16:13:29 -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.285 16:13:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.285 16:13:29 -- setup/common.sh@31 -- # IFS=': '
00:04:10.286 16:13:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7869952 kB' 'MemAvailable: 9425388 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 467140 kB' 'Inactive: 1421548 kB' 'Active(anon): 127916 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118996 kB' 'Mapped: 50700 kB' 'Shmem: 10492 kB' 'KReclaimable: 63184 kB' 'Slab: 161528 kB' 'SReclaimable: 63184 kB' 'SUnreclaim: 98344 kB' 'KernelStack: 6512 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
00:04:10.286 16:13:29 -- setup/common.sh@31 -- # read -r var val _
[... per-field xtrace elided (00:04:10.286-00:04:10.287): MemTotal through HugePages_Free each fail the HugePages_Rsvd match and "continue" ...]
00:04:10.287 16:13:29 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.287 16:13:29 -- setup/common.sh@33 -- # echo 0
00:04:10.287 16:13:29 -- setup/common.sh@33 -- # return 0
00:04:10.287 16:13:29 -- setup/hugepages.sh@100 -- # resv=0
00:04:10.287 16:13:29 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:10.287 nr_hugepages=1024
00:04:10.287 resv_hugepages=0
00:04:10.287 16:13:29 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:10.287 surplus_hugepages=0
00:04:10.287 16:13:29 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:10.287 anon_hugepages=0
00:04:10.287 16:13:29 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:10.287 16:13:29 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:10.287 16:13:29 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:10.287 16:13:29 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:10.287 16:13:29 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:10.287 16:13:29 -- setup/common.sh@18 -- # local node=
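At this point the verification arithmetic has everything it needs: anon, surp and resv are all 0, and the pool must add up to the 1024 pages even_2G_alloc requested. A sketch of the checks traced at hugepages.sh@102-110 as I read them, reusing the get_meminfo and anon sketches above (variable names beyond those shown in the trace are illustrative):

# Sketch of the consistency checks; NRHUGE=1024 was set by the test earlier.
NRHUGE=1024
surp=$(get_meminfo HugePages_Surp)     # 0 here: no surplus pages in the pool
resv=$(get_meminfo HugePages_Rsvd)     # 0 here: none reserved but unfaulted
total=$(get_meminfo HugePages_Total)   # 1024 here

echo "nr_hugepages=$total"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

# The kernel's pool must account exactly for the requested pages:
(( total == NRHUGE + surp + resv )) || echo "unexpected hugepage pool size"
(( total == NRHUGE ))                # no surplus/reserved expected in this test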
setup/common.sh@19 -- # local var val 00:04:10.287 16:13:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.287 16:13:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.287 16:13:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.287 16:13:29 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.287 16:13:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.287 16:13:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.287 16:13:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.287 16:13:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.287 16:13:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7869952 kB' 'MemAvailable: 9425388 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 467048 kB' 'Inactive: 1421548 kB' 'Active(anon): 127824 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118904 kB' 'Mapped: 50700 kB' 'Shmem: 10492 kB' 'KReclaimable: 63184 kB' 'Slab: 161524 kB' 'SReclaimable: 63184 kB' 'SUnreclaim: 98340 kB' 'KernelStack: 6496 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB' 00:04:10.287
[xtrace elided: setup/common.sh@32 walks the snapshot above key by key against HugePages_Total and hits continue on every non-match, MemTotal through Unaccepted]
16:13:29 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.289 16:13:29 -- setup/common.sh@33 -- # echo 1024 00:04:10.289 16:13:29 -- setup/common.sh@33 -- # return 0 00:04:10.289
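For readability, the elided scan above reduces to one small lookup routine. The following is a minimal, self-contained sketch of what the traced get_meminfo in setup/common.sh is doing, reconstructed from the xtrace alone; the name meminfo_get and the loop shape are ours, not copied from the repository:

  #!/usr/bin/env bash
  # meminfo_get KEY [NODE] - print KEY's value from /proc/meminfo, or from
  # the per-node sysfs meminfo when NODE is given (sketch of the traced flow).
  meminfo_get() {
      local get=$1 node=$2 line var val _
      local mem_f=/proc/meminfo
      # Per-node counters live in sysfs; those files prefix every line
      # with "Node N ", stripped below so both formats parse identically.
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      shopt -s extglob
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")
      for line in "${mem[@]}"; do
          # "HugePages_Total:    1024" splits into key, bare value, unit.
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  meminfo_get HugePages_Total     # prints 1024, matching the echo above
  meminfo_get HugePages_Surp 0    # node 0 lookup, as traced next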
16:13:29 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:10.289 16:13:29 -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.289 16:13:29 -- setup/hugepages.sh@27 -- # local node 00:04:10.289 16:13:29 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.289 16:13:29 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:10.289 16:13:29 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:10.289 16:13:29 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.289 16:13:29 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.289 16:13:29 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.289 16:13:29 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.289 16:13:29 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.289 16:13:29 -- setup/common.sh@18 -- # local node=0 00:04:10.289 16:13:29 -- setup/common.sh@19 -- # local var val 00:04:10.289 16:13:29 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.289 16:13:29 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.289 16:13:29 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.289 16:13:29 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.289 16:13:29 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.289 16:13:29 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.289 16:13:29 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.289 16:13:29 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.289 16:13:29 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7869952 kB' 'MemUsed: 4367144 kB' 'SwapCached: 0 kB' 'Active: 466664 kB' 'Inactive: 1421548 kB' 'Active(anon): 127440 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1771264 kB' 'Mapped: 50700 kB' 'AnonPages: 118548 kB' 'Shmem: 10492 kB' 'KernelStack: 6496 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63184 kB' 'Slab: 161524 kB' 'SReclaimable: 63184 kB' 'SUnreclaim: 98340 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:10.289
[xtrace elided: the same key-by-key walk, this time against HugePages_Surp in the node0 snapshot, continue on every non-match from MemTotal through HugePages_Free]
16:13:29 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.290 16:13:29 -- setup/common.sh@33 -- # echo 0 00:04:10.290 16:13:29 -- setup/common.sh@33 -- # return 0 00:04:10.290
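Before that node 0 lookup, get_nodes walks the NUMA topology with the node+([0-9]) glob seen above and records a per-node page count (nodes_sys[0]=1024 here). Below is a sketch of that discovery step; the glob mirrors the trace, while reading nr_hugepages from the kernel's standard per-node hugepage sysfs file is our assumption about where the count comes from:

  # Enumerate NUMA nodes and record each node's 2048 kB hugepage count.
  shopt -s extglob nullglob
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "no_nodes=${#nodes_sys[@]}"   # 1 on this VM, nodes_sys[0]=1024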
16:13:29 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.290 16:13:29 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.290 16:13:29 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.290 node0=1024 expecting 1024 16:13:29 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.290 16:13:29 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:10.290 16:13:29 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:10.290 ************************************ 00:04:10.290 END TEST even_2G_alloc 00:04:10.290 ************************************ 00:04:10.290 00:04:10.290 real 0m0.575s 00:04:10.290 user 0m0.249s 00:04:10.290 sys 0m0.338s 00:04:10.290 16:13:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:10.290 16:13:29 -- common/autotest_common.sh@10 -- # set +x 00:04:10.290 16:13:29 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:10.290 16:13:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.290 16:13:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.290 16:13:29 -- common/autotest_common.sh@10 -- # set +x 00:04:10.290 ************************************ 00:04:10.290 START TEST odd_alloc 00:04:10.290 ************************************ 00:04:10.290 16:13:29 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:10.290 16:13:29 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:10.290 16:13:29 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:10.290 16:13:29 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:10.290 16:13:29 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:10.290 16:13:29 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:10.290 16:13:29 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:10.290 16:13:29 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:10.290 16:13:29 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:10.290 16:13:29 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:10.290 16:13:29 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:04:10.290 16:13:29 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:10.290 16:13:29 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:10.290 16:13:29 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:10.290 16:13:29 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:10.290 16:13:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.290 16:13:29 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:04:10.290 16:13:29 -- setup/hugepages.sh@83 -- # : 0 00:04:10.290 16:13:29 -- setup/hugepages.sh@84 -- # : 0 00:04:10.290 16:13:29 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:10.290 16:13:29 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:10.290 16:13:29 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:10.290 16:13:29 -- setup/hugepages.sh@160 -- # setup output 00:04:10.290 16:13:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.290 16:13:29 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:10.866
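The odd_alloc test that begins above deliberately requests a page count that cannot be even: HUGEMEM=2049 MB is 2098176 kB, and at the default 2048 kB per hugepage that is 1024.5 pages, which the harness lands on as nr_hugepages=1025. A ceiling-division sketch of that arithmetic (the round-up direction is inferred from the traced result, not read out of hugepages.sh):

  # 2049 MB at 2048 kB per page is not an integer count; round up to 1025.
  default_hugepages=2048      # kB per page (Hugepagesize)
  size=2098176                # kB requested: HUGEMEM=2049 MB
  echo $(( (size + default_hugepages - 1) / default_hugepages ))   # 1025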
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.866 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:10.866 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:10.867 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:10.867 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:04:10.867 16:13:30 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:10.867 16:13:30 -- setup/hugepages.sh@89 -- # local node 00:04:10.867 16:13:30 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:10.867 16:13:30 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:10.867 16:13:30 -- setup/hugepages.sh@92 -- # local surp 00:04:10.867 16:13:30 -- setup/hugepages.sh@93 -- # local resv 00:04:10.867 16:13:30 -- setup/hugepages.sh@94 -- # local anon 00:04:10.867 16:13:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:10.867 16:13:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:10.867 16:13:30 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:10.867 16:13:30 -- setup/common.sh@18 -- # local node= 00:04:10.867 16:13:30 -- setup/common.sh@19 -- # local var val 00:04:10.867 16:13:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.867 16:13:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.867 16:13:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.867 16:13:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.867 16:13:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.867 16:13:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.867 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.867 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.867 16:13:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7885372 kB' 'MemAvailable: 9440808 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 467720 kB' 'Inactive: 1421548 kB' 'Active(anon): 128496 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119408 kB' 'Mapped: 50668 kB' 'Shmem: 10492 kB' 'KReclaimable: 63184 kB' 'Slab: 161624 kB' 'SReclaimable: 63184 kB' 'SUnreclaim: 98440 kB' 'KernelStack: 6604 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55624 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB' 00:04:10.867
[xtrace elided: key-by-key walk against AnonHugePages, continue on every non-match from MemTotal through HardwareCorrupted]
16:13:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:10.868 16:13:30 -- setup/common.sh@33 -- # echo 0 00:04:10.868 16:13:30 -- setup/common.sh@33 -- # return 0 00:04:10.868
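The pattern test at hugepages.sh@96 above is a transparent-hugepage gate: /sys/kernel/mm/transparent_hugepage/enabled reports the active mode as the bracketed token ("always [madvise] never" on this VM), and only when that mode is not [never] does the verifier fetch AnonHugePages at all. A sketch of the gate, reusing the meminfo_get sketch from earlier:

  # The bracketed word is the active THP mode; anything but [never] means
  # anonymous hugepages may exist and must be counted toward the total.
  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
  anon=0
  if [[ $thp != *"[never]"* ]]; then
      anon=$(meminfo_get AnonHugePages)   # 0 kB in this run's snapshot
  fi
  echo "anon=$anon"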
16:13:30 -- setup/hugepages.sh@97 -- # anon=0 00:04:10.868 16:13:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:10.868 16:13:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.868 16:13:30 -- setup/common.sh@18 -- # local node= 00:04:10.868 16:13:30 -- setup/common.sh@19 -- # local var val 00:04:10.868 16:13:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.868 16:13:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.868 16:13:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.868 16:13:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.868 16:13:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.868 16:13:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.868 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.868 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.868 16:13:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7885000 kB' 'MemAvailable: 9440444 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 467180 kB' 'Inactive: 1421548 kB' 'Active(anon): 127956 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 119092 kB' 'Mapped: 50736 kB' 'Shmem: 10492 kB' 'KReclaimable: 63200 kB' 'Slab: 161636 kB' 'SReclaimable: 63200 kB' 'SUnreclaim: 98436 kB' 'KernelStack: 6524 kB' 'PageTables: 3880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB' 00:04:10.868
[xtrace elided: key-by-key walk against HugePages_Surp, continue on every non-match from MemTotal through HugePages_Rsvd]
16:13:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.869 16:13:30 -- setup/common.sh@33 -- # echo 0 00:04:10.869 16:13:30 -- setup/common.sh@33 -- # return 0 00:04:10.869 16:13:30 -- setup/hugepages.sh@99 -- # surp=0 00:04:10.869
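With anon and surp both 0, the verifier still needs HugePages_Rsvd, which is the lookup the trace starts next. The reconciliation being assembled mirrors the (( 1024 == nr_hugepages + surp + resv )) check traced earlier in the even_2G_alloc pass; in sketch form, with variable names ours and values taken from this run's snapshots:

  # System-wide totals must reconcile with the requested count once
  # surplus and reserved pages are folded in.
  total=$(meminfo_get HugePages_Total)   # 1025 in the snapshot above
  surp=0                                 # HugePages_Surp, just fetched
  resv=$(meminfo_get HugePages_Rsvd)     # the lookup beginning below
  nr_hugepages=1025
  (( total == nr_hugepages + surp + resv )) && echo 'hugepage total verified'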
16:13:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:10.869 16:13:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:10.869 16:13:30 -- setup/common.sh@18 -- # local node= 00:04:10.869 16:13:30 -- setup/common.sh@19 -- # local var val 00:04:10.869 16:13:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.869 16:13:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.869 16:13:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.869 16:13:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.869 16:13:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.869 16:13:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.869 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.869 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.869 16:13:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7885084 kB' 'MemAvailable: 9440528 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 466948 kB' 'Inactive: 1421548 kB' 'Active(anon): 127724 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118820 kB' 'Mapped: 50700 kB' 'Shmem: 10492 kB' 'KReclaimable: 63200 kB' 'Slab: 161648 kB' 'SReclaimable: 63200 kB' 'SUnreclaim: 98448 kB' 'KernelStack: 6496 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13457552 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB' 00:04:10.869
[xtrace elided: key-by-key walk against HugePages_Rsvd under way, continue on every non-match from MemTotal through AnonPages so far; the scan resumes below]
16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30
-- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.870 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.870 16:13:30 -- setup/common.sh@32 -- # [[ CmaFree == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:10.871 16:13:30 -- setup/common.sh@33 -- # echo 0 00:04:10.871 16:13:30 -- setup/common.sh@33 -- # return 0 00:04:10.871 nr_hugepages=1025 00:04:10.871 resv_hugepages=0 00:04:10.871 surplus_hugepages=0 00:04:10.871 anon_hugepages=0 00:04:10.871 16:13:30 -- setup/hugepages.sh@100 -- # resv=0 00:04:10.871 16:13:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:10.871 16:13:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:10.871 16:13:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:10.871 16:13:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:10.871 16:13:30 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:10.871 16:13:30 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:10.871 16:13:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:10.871 16:13:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:10.871 16:13:30 -- setup/common.sh@18 -- # local node= 00:04:10.871 16:13:30 -- setup/common.sh@19 -- # local var val 00:04:10.871 16:13:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.871 16:13:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.871 16:13:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:10.871 16:13:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:10.871 16:13:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.871 16:13:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7885084 kB' 'MemAvailable: 9440528 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 466752 kB' 'Inactive: 1421548 kB' 'Active(anon): 127528 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'AnonPages: 118572 kB' 'Mapped: 50700 kB' 'Shmem: 10492 kB' 'KReclaimable: 63200 kB' 'Slab: 161648 kB' 'SReclaimable: 63200 kB' 'SUnreclaim: 98448 kB' 'KernelStack: 6496 kB' 'PageTables: 3844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 13457552 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB' 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.871 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.871 16:13:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 
00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:10.872 16:13:30 -- setup/common.sh@33 -- # echo 1025 00:04:10.872 16:13:30 -- setup/common.sh@33 -- # return 0 00:04:10.872 16:13:30 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:10.872 16:13:30 -- setup/hugepages.sh@112 -- # get_nodes 00:04:10.872 16:13:30 -- setup/hugepages.sh@27 -- # local node 00:04:10.872 16:13:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:10.872 16:13:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:04:10.872 16:13:30 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:10.872 16:13:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:10.872 16:13:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:10.872 16:13:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:10.872 16:13:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:10.872 16:13:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:10.872 16:13:30 -- setup/common.sh@18 -- # local node=0 00:04:10.872 16:13:30 -- setup/common.sh@19 -- # local var val 00:04:10.872 16:13:30 -- setup/common.sh@20 -- # local mem_f mem 00:04:10.872 16:13:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:10.872 16:13:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:10.872 16:13:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:10.872 16:13:30 -- setup/common.sh@28 -- # mapfile -t mem 00:04:10.872 16:13:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7885084 kB' 'MemUsed: 4352012 kB' 'SwapCached: 0 kB' 'Active: 466968 kB' 'Inactive: 1421548 kB' 'Active(anon): 127744 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 300 kB' 'Writeback: 0 kB' 'FilePages: 1771264 kB' 'Mapped: 50700 kB' 'AnonPages: 118788 kB' 'Shmem: 10492 kB' 'KernelStack: 6548 kB' 'PageTables: 3796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63200 kB' 'Slab: 161648 kB' 'SReclaimable: 63200 kB' 'SUnreclaim: 98448 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.872 16:13:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.872 
16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.872 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 
16:13:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # continue 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # IFS=': ' 00:04:10.873 16:13:30 -- setup/common.sh@31 -- # read -r var val _ 00:04:10.873 16:13:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.873 16:13:30 -- setup/common.sh@33 -- # echo 0 00:04:10.873 16:13:30 -- setup/common.sh@33 -- # return 0 00:04:10.873 16:13:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.873 16:13:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.873 16:13:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.873 16:13:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.873 16:13:30 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:04:10.873 node0=1025 expecting 1025 00:04:10.873 16:13:30 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:04:10.873 00:04:10.873 real 0m0.593s 00:04:10.873 user 0m0.252s 00:04:10.873 sys 0m0.352s 00:04:10.873 16:13:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:10.873 ************************************ 00:04:10.873 END TEST odd_alloc 00:04:10.873 ************************************ 00:04:10.873 16:13:30 -- common/autotest_common.sh@10 -- # set +x 00:04:10.873 16:13:30 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:10.873 16:13:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:10.873 16:13:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:10.873 16:13:30 -- common/autotest_common.sh@10 -- # set +x 00:04:10.873 
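
Note on the trace above: all of the IFS/read/continue runs come from one helper, get_meminfo, which prints a /proc/meminfo (or per-node meminfo) snapshot and then walks it key by key, hitting continue on every non-matching line. A condensed sketch of that behaviour, assuming bash with extglob enabled; this mirrors what the trace shows and is not the verbatim setup/common.sh source:

    #!/usr/bin/env bash
    shopt -s extglob   # for the +([0-9]) pattern used below

    # get_meminfo KEY [NODE] -- echo the value recorded for KEY.
    get_meminfo() {
        local get=$1 node=$2
        local var val line
        local mem_f=/proc/meminfo mem
        # A node argument switches the source to that node's own meminfo.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <N> "; strip it so the
        # lines match the plain "Key: value" shape of /proc/meminfo.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the continue runs in the log
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total      # 1025 while odd_alloc is active
    get_meminfo HugePages_Surp 0     # node0 lookup, as at setup/hugepages.sh@117

The linear scan also explains the shape of the log: every lookup replays the whole key list up to its target, so each get_meminfo call costs one full pass of xtrace output.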
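The odd_alloc verdict itself reduces to the arithmetic traced at setup/hugepages.sh@100-130. A minimal sketch of that bookkeeping, reusing the get_meminfo sketch above; a single-node layout is assumed, as on this VM, whereas the real script loops over every node:

    nr_hugepages=1025
    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run

    # Global pool: the kernel must report the odd page count exactly,
    # i.e. 1025 == 1025 + 0 + 0.
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) || exit 1

    # Per-node pool: node0 is expected to hold everything, adjusted by any
    # reserved pages plus that node's own surplus.
    expected=$(( nr_hugepages + resv + $(get_meminfo HugePages_Surp 0) ))
    actual=$(get_meminfo HugePages_Total 0)
    echo "node0=$actual expecting $expected"
    [[ $actual == "$expected" ]]

With surp and resv both 0 this prints node0=1025 expecting 1025, which is exactly the line the log emits just before END TEST odd_alloc.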
00:04:10.873 ************************************
00:04:10.873 START TEST custom_alloc
00:04:10.873 ************************************
00:04:10.873 16:13:30 -- common/autotest_common.sh@1114 -- # custom_alloc
00:04:10.873 16:13:30 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:10.873 16:13:30 -- setup/hugepages.sh@169 -- # local node
00:04:10.873 16:13:30 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:10.873 16:13:30 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:10.873 16:13:30 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:10.873 16:13:30 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:10.873 16:13:30 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:10.873 16:13:30 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:10.873 16:13:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:10.873 16:13:30 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:10.873 16:13:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:10.873 16:13:30 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:10.873 16:13:30 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:10.873 16:13:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:10.873 16:13:30 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:10.873 16:13:30 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:10.873 16:13:30 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:10.873 16:13:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:10.873 16:13:30 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:10.873 16:13:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:10.873 16:13:30 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:10.873 16:13:30 -- setup/hugepages.sh@83 -- # : 0
00:04:10.873 16:13:30 -- setup/hugepages.sh@84 -- # : 0
00:04:10.873 16:13:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:10.873 16:13:30 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:10.873 16:13:30 -- setup/hugepages.sh@176 -- # (( 1 > 1 ))
00:04:10.873 16:13:30 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:10.873 16:13:30 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:10.873 16:13:30 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:10.873 16:13:30 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:10.873 16:13:30 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:10.873 16:13:30 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:10.873 16:13:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:10.873 16:13:30 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:10.873 16:13:30 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:10.873 16:13:30 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:10.873 16:13:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:10.873 16:13:30 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:10.873 16:13:30 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:10.873 16:13:30 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:10.873 16:13:30 -- setup/hugepages.sh@78 -- # return 0
00:04:10.873 16:13:30 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512'
00:04:10.873 16:13:30 -- setup/hugepages.sh@187 -- # setup output
00:04:10.873 16:13:30 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:10.873 16:13:30 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:11.451 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:11.451 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:11.451 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:11.451 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:11.451 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:11.451 16:13:31 -- setup/hugepages.sh@188 -- # nr_hugepages=512
00:04:11.451 16:13:31 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:11.451 16:13:31 -- setup/hugepages.sh@89 -- # local node
00:04:11.451 16:13:31 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:11.451 16:13:31 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:11.451 16:13:31 -- setup/hugepages.sh@92 -- # local surp
00:04:11.451 16:13:31 -- setup/hugepages.sh@93 -- # local resv
00:04:11.451 16:13:31 -- setup/hugepages.sh@94 -- # local anon
00:04:11.451 16:13:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:11.451 16:13:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:11.451 16:13:31 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:11.451 16:13:31 -- setup/common.sh@18 -- # local node=
00:04:11.451 16:13:31 -- setup/common.sh@19 -- # local var val
00:04:11.451 16:13:31 -- setup/common.sh@20 -- # local mem_f mem
00:04:11.451 16:13:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.451 16:13:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.451 16:13:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.451 16:13:31 -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.451 16:13:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.451 16:13:31 -- setup/common.sh@31 -- # IFS=': '
00:04:11.451 16:13:31 -- setup/common.sh@31 -- # read -r var val _
00:04:11.451 16:13:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8943336 kB' 'MemAvailable: 10498780 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 467656 kB' 'Inactive: 1421548 kB' 'Active(anon): 128432 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119512 kB' 'Mapped: 50612 kB' 'Shmem: 10492 kB' 'KReclaimable: 63200 kB' 'Slab: 161608 kB' 'SReclaimable: 63200 kB' 'SUnreclaim: 98408 kB' 'KernelStack: 6568 kB' 'PageTables: 3920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55624 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
00:04:11.451 16:13:31 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.451 16:13:31 -- setup/common.sh@32 -- # continue
00:04:11.451 16:13:31 -- setup/common.sh@31 -- # IFS=': '
00:04:11.451 16:13:31 -- setup/common.sh@31 -- # read -r var val _
[... identical continue/IFS/read iterations elided until the scan reaches AnonHugePages ...]
00:04:11.452 16:13:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:11.452 16:13:31 -- setup/common.sh@33 -- # echo 0
00:04:11.452 16:13:31 -- setup/common.sh@33 -- # return 0
00:04:11.452 16:13:31 -- setup/hugepages.sh@97 -- # anon=0
00:04:11.452 16:13:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:11.452 16:13:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:11.452 16:13:31 -- setup/common.sh@18 -- # local node=
00:04:11.452 16:13:31 -- setup/common.sh@19 -- # local var val
00:04:11.452 16:13:31 -- setup/common.sh@20 -- # local mem_f mem
00:04:11.452 16:13:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:11.452 16:13:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:11.452 16:13:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:11.452 16:13:31 -- setup/common.sh@28 -- # mapfile -t mem
00:04:11.452 16:13:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:11.452 16:13:31 -- setup/common.sh@31 -- # IFS=': '
00:04:11.452 16:13:31 -- setup/common.sh@31 -- # read -r var val _
00:04:11.452 16:13:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8943336 kB' 'MemAvailable: 10498780 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 466940 kB' 'Inactive: 1421548 kB' 'Active(anon): 127716 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118800 kB' 'Mapped: 50672 kB' 'Shmem: 10492 kB' 'KReclaimable: 63200 kB' 'Slab: 161612 kB' 'SReclaimable: 63200 kB' 'SUnreclaim: 98412 kB' 'KernelStack: 6536 kB' 'PageTables: 3824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
00:04:11.452 16:13:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.452 16:13:31 -- setup/common.sh@32 -- # continue
00:04:11.452 16:13:31 -- setup/common.sh@31 -- # IFS=': '
00:04:11.452 16:13:31 -- setup/common.sh@31 -- # read -r var val _
00:04:11.452 16:13:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.452 16:13:31 -- setup/common.sh@32 -- # continue
00:04:11.452 16:13:31 -- setup/common.sh@31 -- # IFS=': '
00:04:11.452 16:13:31 -- setup/common.sh@31 -- # read -r var val _
00:04:11.452 16:13:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.452 16:13:31 -- setup/common.sh@32 -- # continue
00:04:11.452 16:13:31 -- setup/common.sh@31 -- # IFS=': '
00:04:11.452 16:13:31 -- setup/common.sh@31 -- # read -r var val _
00:04:11.452 16:13:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:11.452 16:13:31
-- setup/common.sh@31 -- # IFS=': ' 00:04:11.452 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.452 [... the same test-and-continue cycle repeats for each field from Cached through AnonHugePages, none matching HugePages_Surp ...] 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 16:13:31 -- setup/common.sh@32 -- #
continue 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.453 16:13:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.453 16:13:31 -- setup/common.sh@33 -- # echo 0 00:04:11.453 16:13:31 -- setup/common.sh@33 -- # return 0 00:04:11.453 16:13:31 -- setup/hugepages.sh@99 -- # surp=0 00:04:11.453 16:13:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:11.453 16:13:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:11.453 16:13:31 -- setup/common.sh@18 -- # local node= 00:04:11.453 16:13:31 -- setup/common.sh@19 -- # local var val 00:04:11.453 16:13:31 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.453 16:13:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.453 16:13:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.453 16:13:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.453 16:13:31 -- 
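
The wall of @31/@32 lines above is bash xtrace of a single helper: setup/common.sh's get_meminfo prints one field out of the /proc/meminfo snapshot it captured with mapfile, splitting each line on ': ' and skipping every non-matching field with continue. That is why one lookup (AnonHugePages giving anon=0, then HugePages_Surp) emits a test-and-continue pair per meminfo field. A minimal sketch of that loop, reconstructed from the trace rather than copied from setup/common.sh, so names and details are approximate:

    #!/usr/bin/env bash
    # Rough reconstruction of get_meminfo as traced above (approximate).
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem var val _ line
        local mem_f=/proc/meminfo
        # With a node id, the per-node sysfs file is used instead (@23-24).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"                # @28: snapshot the file
        mem=("${mem[@]#Node +([0-9]) }")         # @29: drop "Node N " prefixes
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"              # @31: split "Field: value kB"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }  # @32-33
        done
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 in the run above

One quirk visible in the trace: with no node argument the @23 test probes the literal path /sys/devices/system/node/node/meminfo, which never exists, so the system-wide /proc/meminfo is kept.
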
setup/common.sh@28 -- # mapfile -t mem 00:04:11.453 16:13:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.453 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 16:13:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8943844 kB' 'MemAvailable: 10499288 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 466640 kB' 'Inactive: 1421548 kB' 'Active(anon): 127416 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118760 kB' 'Mapped: 50672 kB' 'Shmem: 10492 kB' 'KReclaimable: 63200 kB' 'Slab: 161608 kB' 'SReclaimable: 63200 kB' 'SUnreclaim: 98408 kB' 'KernelStack: 6504 kB' 'PageTables: 3684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB' 00:04:11.454 16:13:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.454 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.454 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 16:13:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.454 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.454 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 16:13:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.454 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.454 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 16:13:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.454 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.454 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 16:13:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.454 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.454 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 16:13:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.454 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.454 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 16:13:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.454 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.454 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.454 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.454 16:13:31 -- 
setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.454 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.454 [... the same test-and-continue cycle repeats for each field from Active(anon) through FilePmdMapped, none matching HugePages_Rsvd ...] 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # read -r var
val _ 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:11.455 16:13:31 -- setup/common.sh@33 -- # echo 0 00:04:11.455 16:13:31 -- setup/common.sh@33 -- # return 0 00:04:11.455 nr_hugepages=512 00:04:11.455 16:13:31 -- setup/hugepages.sh@100 -- # resv=0 00:04:11.455 16:13:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:04:11.455 resv_hugepages=0 00:04:11.455 surplus_hugepages=0 00:04:11.455 anon_hugepages=0 00:04:11.455 16:13:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:11.455 16:13:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:11.455 16:13:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:11.455 16:13:31 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:11.455 16:13:31 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:04:11.455 16:13:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:11.455 16:13:31 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:11.455 16:13:31 -- setup/common.sh@18 -- # local node= 00:04:11.455 16:13:31 -- setup/common.sh@19 -- # local var val 00:04:11.455 16:13:31 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.455 16:13:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.455 16:13:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:11.455 16:13:31 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:11.455 16:13:31 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.455 16:13:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 16:13:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8944196 kB' 'MemAvailable: 10499640 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 466616 kB' 'Inactive: 1421548 kB' 'Active(anon): 127392 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118740 kB' 'Mapped: 50700 kB' 'Shmem: 10492 kB' 'KReclaimable: 63200 kB' 'Slab: 161640 kB' 'SReclaimable: 63200 kB' 'SUnreclaim: 98440 kB' 'KernelStack: 6480 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13982864 kB' 'Committed_AS: 314048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55592 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB' 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.455 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.455 16:13:31 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.455 
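
At hugepages.sh@97-109 the test has now collected anon=0, surp=0 and resv=0 and checks them against the 512 pages it asked for; the gates (( 512 == nr_hugepages + surp + resv )) and (( 512 == nr_hugepages )) only pass when the kernel's hugepage pool matches the requested count plus surplus and reserved pages. The same arithmetic, restated as a standalone check (the field names are real /proc/meminfo keys; the expected count 512 comes from this run):

    #!/usr/bin/env bash
    # Re-derive the custom_alloc accounting gate from /proc/meminfo (sketch).
    expected=512
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    # In the run above: 512 == 512 + 0 + 0, so the check passes.
    if (( total == expected + surp + resv )); then
        echo "hugepage accounting consistent"
    fi

The trace then repeats the whole get_meminfo scan once more for HugePages_Total, which is why another run of test-and-continue lines follows below.
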
16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.455 [... the same test-and-continue cycle repeats for each field from Active(file) through CmaFree, none matching HugePages_Total ...] 00:04:11.456 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456
16:13:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.456 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.456 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.456 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.456 16:13:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:11.456 16:13:31 -- setup/common.sh@33 -- # echo 512 00:04:11.456 16:13:31 -- setup/common.sh@33 -- # return 0 00:04:11.456 16:13:31 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:04:11.456 16:13:31 -- setup/hugepages.sh@112 -- # get_nodes 00:04:11.456 16:13:31 -- setup/hugepages.sh@27 -- # local node 00:04:11.456 16:13:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:11.456 16:13:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:11.456 16:13:31 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:11.456 16:13:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:11.456 16:13:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:11.456 16:13:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:11.456 16:13:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:11.457 16:13:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:11.457 16:13:31 -- setup/common.sh@18 -- # local node=0 00:04:11.457 16:13:31 -- setup/common.sh@19 -- # local var val 00:04:11.457 16:13:31 -- setup/common.sh@20 -- # local mem_f mem 00:04:11.457 16:13:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:11.457 16:13:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:11.457 16:13:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:11.457 16:13:31 -- setup/common.sh@28 -- # mapfile -t mem 00:04:11.457 16:13:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 16:13:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 8944196 kB' 'MemUsed: 3292900 kB' 'SwapCached: 0 kB' 'Active: 466876 kB' 'Inactive: 1421548 kB' 'Active(anon): 127652 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1771264 kB' 'Mapped: 50700 kB' 'AnonPages: 118740 kB' 'Shmem: 10492 kB' 'KernelStack: 6480 kB' 'PageTables: 3800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63200 kB' 'Slab: 161640 kB' 'SReclaimable: 63200 kB' 'SUnreclaim: 98440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # [[ 
MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.457 [... the same test-and-continue cycle repeats for each node0 meminfo field from SwapCached through SReclaimable, none matching HugePages_Surp ...] 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.457 16:13:31 -- setup/common.sh@32 -- #
continue 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.457 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.457 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 16:13:31 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.458 16:13:31 -- setup/common.sh@32 -- # continue 00:04:11.458 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:11.458 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:11.458 16:13:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:11.458 16:13:31 -- setup/common.sh@33 -- # echo 0 00:04:11.458 16:13:31 -- setup/common.sh@33 -- # return 0 00:04:11.718 node0=512 expecting 512 00:04:11.718 ************************************ 00:04:11.718 END TEST custom_alloc 00:04:11.718 ************************************ 00:04:11.718 16:13:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:11.718 16:13:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:11.718 16:13:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:11.718 16:13:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:11.718 16:13:31 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:11.718 16:13:31 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:11.718 00:04:11.718 real 0m0.610s 00:04:11.718 user 0m0.251s 00:04:11.718 sys 0m0.361s 00:04:11.718 16:13:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:11.718 16:13:31 -- common/autotest_common.sh@10 -- # set +x 00:04:11.718 16:13:31 -- 
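The scan condensed above is the pattern setup/common.sh traces all through this section: split each meminfo line on ': ', skip every non-matching key with continue, and echo the value of the first match. A minimal standalone sketch of that shape, with illustrative names rather than the actual SPDK source:

# Sketch only: same scan shape as the traced get_meminfo calls.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long runs of 'continue' in the trace
        echo "$val"                        # the kB unit lands in the discarded field
        return 0
    done </proc/meminfo
    return 1
}
get_meminfo_sketch HugePages_Surp   # prints 0 on this runner, per the dumps below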
00:04:11.718 16:13:31 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:11.718 16:13:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:04:11.718 16:13:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:04:11.718 16:13:31 -- common/autotest_common.sh@10 -- # set +x
00:04:11.718 ************************************
00:04:11.718 START TEST no_shrink_alloc
00:04:11.718 ************************************
00:04:11.718 16:13:31 -- common/autotest_common.sh@1114 -- # no_shrink_alloc
00:04:11.718 16:13:31 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:11.718 16:13:31 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:11.718 16:13:31 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:11.718 16:13:31 -- setup/hugepages.sh@51 -- # shift
00:04:11.718 16:13:31 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:11.718 16:13:31 -- setup/hugepages.sh@52 -- # local node_ids
00:04:11.718 16:13:31 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:11.718 16:13:31 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:11.718 16:13:31 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:11.718 16:13:31 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:11.718 16:13:31 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:11.718 16:13:31 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:11.718 16:13:31 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:04:11.718 16:13:31 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:11.718 16:13:31 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:11.718 16:13:31 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:11.718 16:13:31 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:11.718 16:13:31 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:11.718 16:13:31 -- setup/hugepages.sh@73 -- # return 0
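For reference, the nr_hugepages=1024 traced at hugepages.sh@57 follows from the call's first argument divided by the default hugepage size; the values in this log are consistent with a size given in kB (the meminfo dumps below report Hugepagesize: 2048 kB and Hugetlb: 2097152 kB). An illustrative restatement of that arithmetic:

# Illustrative arithmetic only, mirroring get_test_nr_hugepages 2097152 0:
size_kb=2097152                    # requested hugepage memory, consistent with kB
hugepage_kb=2048                   # 'Hugepagesize: 2048 kB' from /proc/meminfo
echo $(( size_kb / hugepage_kb ))  # prints 1024, the traced nr_hugepages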
00:04:11.718 16:13:31 -- setup/hugepages.sh@198 -- # setup output
00:04:11.718 16:13:31 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:11.718 16:13:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:11.983 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:11.983 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:11.983 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:11.983 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:11.983 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:12.255 16:13:31 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:12.255 16:13:31 -- setup/hugepages.sh@89 -- # local node
00:04:12.255 16:13:31 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:12.255 16:13:31 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:12.255 16:13:31 -- setup/hugepages.sh@92 -- # local surp
00:04:12.255 16:13:31 -- setup/hugepages.sh@93 -- # local resv
00:04:12.255 16:13:31 -- setup/hugepages.sh@94 -- # local anon
00:04:12.256 16:13:31 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:12.256 16:13:31 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:12.256 16:13:31 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:12.256 16:13:31 -- setup/common.sh@18 -- # local node=
00:04:12.256 16:13:31 -- setup/common.sh@19 -- # local var val
00:04:12.256 16:13:31 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.256 16:13:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.256 16:13:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.256 16:13:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.256 16:13:31 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.256 16:13:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.256 16:13:31 -- setup/common.sh@31 -- # IFS=': '
00:04:12.256 16:13:31 -- setup/common.sh@31 -- # read -r var val _
00:04:12.256 16:13:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7896424 kB' 'MemAvailable: 9451868 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 467772 kB' 'Inactive: 1421548 kB' 'Active(anon): 128548 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119412 kB' 'Mapped: 50876 kB' 'Shmem: 10492 kB' 'KReclaimable: 63200 kB' 'Slab: 161656 kB' 'SReclaimable: 63200 kB' 'SUnreclaim: 98456 kB' 'KernelStack: 6584 kB' 'PageTables: 3968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314248 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55608 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the scan skips MemTotal through HardwareCorrupted with 'continue' before reaching the requested key]
00:04:12.257 16:13:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:12.257 16:13:31 -- setup/common.sh@33 -- # echo 0
00:04:12.257 16:13:31 -- setup/common.sh@33 -- # return 0
00:04:12.257 16:13:31 -- setup/hugepages.sh@97 -- # anon=0
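One detail worth noting in the call above: the probe path /sys/devices/system/node/node/meminfo is not a typo in the trace. get_meminfo was invoked without a node argument, so node= is empty and the per-node path node$node collapses to just "node"; the -e test fails and the function keeps reading the system-wide /proc/meminfo. A sketch of that fallback, illustrative rather than the actual setup/common.sh:

# Sketch of the node fallback visible at setup/common.sh@22-@25 (names illustrative):
node=                                                # empty: system-wide query
mem_f=/proc/meminfo                                  # default source
node_f=/sys/devices/system/node/node$node/meminfo    # '.../node/node/meminfo' when empty
if [[ -e $node_f && -n $node ]]; then
    mem_f=$node_f                                    # only taken for real per-node queries
fi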
00:04:12.257 16:13:31 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:12.257 16:13:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.257 16:13:31 -- setup/common.sh@18 -- # local node=
00:04:12.257 16:13:31 -- setup/common.sh@19 -- # local var val
00:04:12.257 16:13:31 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.257 16:13:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.257 16:13:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.257 16:13:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.257 16:13:31 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.257 16:13:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.257 16:13:31 -- setup/common.sh@31 -- # IFS=': '
00:04:12.257 16:13:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7896424 kB' 'MemAvailable: 9451868 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 467324 kB' 'Inactive: 1421548 kB' 'Active(anon): 128100 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119220 kB' 'Mapped: 50876 kB' 'Shmem: 10492 kB' 'KReclaimable: 63200 kB' 'Slab: 161664 kB' 'SReclaimable: 63200 kB' 'SUnreclaim: 98464 kB' 'KernelStack: 6536 kB' 'PageTables: 3792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314248 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
00:04:12.257 16:13:31 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the scan skips MemTotal through HugePages_Free with 'continue' before reaching the requested key]
00:04:12.258 16:13:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.258 16:13:31 -- setup/common.sh@33 -- # echo 0
00:04:12.258 16:13:31 -- setup/common.sh@33 -- # return 0
00:04:12.258 16:13:31 -- setup/hugepages.sh@99 -- # surp=0
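HugePages_Surp counts surplus pages allocated beyond the configured pool when overcommit is allowed, so 0 is the expected value right after the pool was sized. If you want to cross-check the value the trace just parsed out of /proc/meminfo, the kernel exposes the same counter under sysfs; the 2 MiB pool shown here matches the Hugepagesize: 2048 kB in the dumps (illustrative command, not part of the test):

# Illustrative cross-check of the counter parsed above:
cat /sys/kernel/mm/hugepages/hugepages-2048kB/surplus_hugepages   # expect 0 here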
00:04:12.258 16:13:31 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:12.258 16:13:31 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:12.258 16:13:31 -- setup/common.sh@18 -- # local node=
00:04:12.258 16:13:31 -- setup/common.sh@19 -- # local var val
00:04:12.258 16:13:31 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.258 16:13:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.258 16:13:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.258 16:13:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.258 16:13:31 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.258 16:13:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.258 16:13:31 -- setup/common.sh@31 -- # IFS=': '
00:04:12.258 16:13:31 -- setup/common.sh@31 -- # read -r var val _
00:04:12.258 16:13:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7896424 kB' 'MemAvailable: 9451868 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 467192 kB' 'Inactive: 1421548 kB' 'Active(anon): 127968 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119044 kB' 'Mapped: 50700 kB' 'Shmem: 10492 kB' 'KReclaimable: 63200 kB' 'Slab: 161676 kB' 'SReclaimable: 63200 kB' 'SUnreclaim: 98476 kB' 'KernelStack: 6496 kB' 'PageTables: 3860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314248 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the scan skips MemTotal through HugePages_Free with 'continue' before reaching the requested key]
00:04:12.260 16:13:31 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:12.260 16:13:31 -- setup/common.sh@33 -- # echo 0
00:04:12.260 16:13:31 -- setup/common.sh@33 -- # return 0
00:04:12.260 16:13:31 -- setup/hugepages.sh@100 -- # resv=0
00:04:12.260 16:13:31 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:12.260 nr_hugepages=1024
00:04:12.260 16:13:31 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:12.260 resv_hugepages=0
00:04:12.260 16:13:31 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:12.260 surplus_hugepages=0
00:04:12.260 16:13:31 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:12.260 anon_hugepages=0
00:04:12.260 16:13:31 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:12.260 16:13:31 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
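The two arithmetic checks above are the heart of verify_nr_hugepages: the pool the test configured must be fully accounted for, with no surplus or reserved pages hiding in the total. Restated as an illustrative sketch with this run's traced values:

# Illustrative restatement of the checks at hugepages.sh@107 and @109:
expected=1024       # pool size the test set up
nr_hugepages=1024   # value echoed above
surp=0              # from get_meminfo HugePages_Surp
resv=0              # from get_meminfo HugePages_Rsvd
(( expected == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch'
(( expected == nr_hugepages ))               || echo 'unexpected surplus/reserved pages'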
00:04:12.260 16:13:31 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:12.260 16:13:31 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:12.260 16:13:31 -- setup/common.sh@18 -- # local node=
00:04:12.260 16:13:31 -- setup/common.sh@19 -- # local var val
00:04:12.260 16:13:31 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.260 16:13:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.260 16:13:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.260 16:13:31 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.260 16:13:31 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.260 16:13:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.260 16:13:31 -- setup/common.sh@31 -- # IFS=': '
00:04:12.260 16:13:31 -- setup/common.sh@31 -- # read -r var val _
00:04:12.260 16:13:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7896424 kB' 'MemAvailable: 9451868 kB' 'Buffers: 3704 kB' 'Cached: 1767560 kB' 'SwapCached: 0 kB' 'Active: 467064 kB' 'Inactive: 1421548 kB' 'Active(anon): 127840 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118920 kB' 'Mapped: 50700 kB' 'Shmem: 10492 kB' 'KReclaimable: 63200 kB' 'Slab: 161680 kB' 'SReclaimable: 63200 kB' 'SUnreclaim: 98480 kB' 'KernelStack: 6528 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 314248 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55576 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
[xtrace condensed: the scan skips MemTotal through HardwareCorrupted with 'continue'; the HugePages_Total key has not yet been reached at this point in the log]
00:04:12.261 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.261 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.261 16:13:31 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.261 16:13:31 -- setup/common.sh@32 -- # continue 00:04:12.261 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.261 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.261 16:13:31 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.261 16:13:31 -- setup/common.sh@32 -- # continue 00:04:12.261 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.261 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.261 16:13:31 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.261 16:13:31 -- setup/common.sh@32 -- # continue 00:04:12.261 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.261 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.261 16:13:31 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.261 16:13:31 -- setup/common.sh@32 -- # continue 00:04:12.261 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.261 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.261 16:13:31 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.261 16:13:31 -- setup/common.sh@32 -- # continue 00:04:12.261 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.261 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.261 16:13:31 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.261 16:13:31 -- setup/common.sh@32 -- # continue 00:04:12.261 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.261 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.261 16:13:31 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.261 16:13:31 -- setup/common.sh@32 -- # continue 00:04:12.261 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.261 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.261 16:13:31 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.261 16:13:31 -- setup/common.sh@32 -- # continue 00:04:12.261 16:13:31 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.261 16:13:31 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.261 16:13:31 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.261 16:13:31 -- setup/common.sh@33 -- # echo 1024 00:04:12.261 16:13:31 -- setup/common.sh@33 -- # return 0 00:04:12.261 16:13:31 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.261 16:13:31 -- setup/hugepages.sh@112 -- # get_nodes 00:04:12.261 16:13:31 -- setup/hugepages.sh@27 -- # local node 00:04:12.261 16:13:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.261 16:13:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:12.261 16:13:31 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:12.261 16:13:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:12.261 16:13:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.261 16:13:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.261 16:13:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:12.261 16:13:31 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.261 16:13:31 -- setup/common.sh@18 -- # local node=0 
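The scan traced above is the pattern every memory query in this run follows: snapshot a meminfo file, strip any per-node "Node <N> " prefix, then split each line on ':' and whitespace until the requested key matches and its value is echoed. Below is a minimal standalone sketch of that lookup, not the repo's actual setup/common.sh helper; the name get_meminfo_sketch is illustrative, and it uses a sed strip instead of the script's mapfile/extglob prefix removal.

#!/usr/bin/env bash
# Sketch of the meminfo lookup pattern seen in the trace above.
# Usage: get_meminfo_sketch <Key> [node]  ->  prints the value for <Key>
get_meminfo_sketch() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo var val _
	# With a node id, read the per-node file instead of the global one.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	# Per-node lines carry a "Node <N> " prefix; strip it, then walk the
	# key/value pairs until the requested key matches.
	while IFS=': ' read -r var val _; do
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
	return 1
}

# e.g. get_meminfo_sketch HugePages_Total   -> 1024 on this box
# e.g. get_meminfo_sketch HugePages_Surp 0  -> 0 for node0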
00:04:12.261 16:13:31 -- setup/hugepages.sh@112 -- # get_nodes
00:04:12.261 16:13:31 -- setup/hugepages.sh@27 -- # local node
00:04:12.261 16:13:31 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:12.261 16:13:31 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:12.261 16:13:31 -- setup/hugepages.sh@32 -- # no_nodes=1
00:04:12.261 16:13:31 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:12.261 16:13:31 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:12.261 16:13:31 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:12.261 16:13:31 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:12.261 16:13:31 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.261 16:13:31 -- setup/common.sh@18 -- # local node=0
00:04:12.261 16:13:31 -- setup/common.sh@19 -- # local var val
00:04:12.261 16:13:31 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.261 16:13:31 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.261 16:13:31 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:12.261 16:13:31 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:12.261 16:13:31 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.261 16:13:31 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.261 16:13:31 -- setup/common.sh@31 -- # IFS=': '
00:04:12.261 16:13:31 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7896424 kB' 'MemUsed: 4340672 kB' 'SwapCached: 0 kB' 'Active: 467028 kB' 'Inactive: 1421548 kB' 'Active(anon): 127804 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421548 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1771264 kB' 'Mapped: 50700 kB' 'AnonPages: 118928 kB' 'Shmem: 10492 kB' 'KernelStack: 6528 kB' 'PageTables: 3948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63200 kB' 'Slab: 161680 kB' 'SReclaimable: 63200 kB' 'SUnreclaim: 98480 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:12.261 16:13:31 -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@32 -- # continue (repeated for each node0 key from MemTotal through HugePages_Free before HugePages_Surp matches) ...]
00:04:12.262 16:13:31 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.262 16:13:31 -- setup/common.sh@33 -- # echo 0
00:04:12.262 16:13:31 -- setup/common.sh@33 -- # return 0
00:04:12.262 16:13:31 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:12.262 16:13:31 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:12.262 16:13:31 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:12.262 node0=1024 expecting 1024
00:04:12.262 16:13:31 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:12.262 16:13:31 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:12.262 16:13:31 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:12.262 16:13:31 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:12.262 16:13:31 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:12.262 16:13:31 -- setup/hugepages.sh@202 -- # setup output
00:04:12.262 16:13:31 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:12.262 16:13:31 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:04:12.523 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:04:12.788 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:12.788 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:12.788 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:12.788 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:04:12.788 INFO: Requested 512 hugepages but 1024 already allocated on node0
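The per-node bookkeeping just traced folds reserved and surplus pages into the expected count for each node and compares that against what the kernel reports, which is what produces the "node0=1024 expecting 1024" line; the INFO line after it shows that with CLEAR_HUGE=no the NRHUGE=512 request leaves the existing 1024-page allocation in place. A rough sketch of that per-node check, reusing the hypothetical get_meminfo_sketch from the previous example and simplifying the script's sorted_t/sorted_s comparison to a direct per-node test (array names are illustrative):

#!/usr/bin/env bash
# Sketch of the per-node hugepage verification the trace performs.
# Assumes get_meminfo_sketch from the earlier example is in scope.
nodes_expected=([0]=1024)   # pages the test configured on each node
resv=0                      # HugePages_Rsvd from the global pass

for node in "${!nodes_expected[@]}"; do
	# Reserved and surplus pages still count toward the node's total.
	(( nodes_expected[node] += resv ))
	(( nodes_expected[node] += $(get_meminfo_sketch HugePages_Surp "$node") ))
	actual=$(get_meminfo_sketch HugePages_Total "$node")
	echo "node$node=$actual expecting ${nodes_expected[node]}"
	[[ $actual == "${nodes_expected[node]}" ]] || exit 1
done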
00:04:12.788 16:13:32 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:12.788 16:13:32 -- setup/hugepages.sh@89 -- # local node
00:04:12.788 16:13:32 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:12.788 16:13:32 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:12.788 16:13:32 -- setup/hugepages.sh@92 -- # local surp
00:04:12.788 16:13:32 -- setup/hugepages.sh@93 -- # local resv
00:04:12.788 16:13:32 -- setup/hugepages.sh@94 -- # local anon
00:04:12.788 16:13:32 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:12.788 16:13:32 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:12.788 16:13:32 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:12.788 16:13:32 -- setup/common.sh@18 -- # local node=
00:04:12.788 16:13:32 -- setup/common.sh@19 -- # local var val
00:04:12.788 16:13:32 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.788 16:13:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.788 16:13:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.788 16:13:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.788 16:13:32 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.788 16:13:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.788 16:13:32 -- setup/common.sh@31 -- # IFS=': '
00:04:12.788 16:13:32 -- setup/common.sh@31 -- # read -r var val _
00:04:12.788 16:13:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7894736 kB' 'MemAvailable: 9450180 kB' 'Buffers: 3704 kB' 'Cached: 1767564 kB' 'SwapCached: 0 kB' 'Active: 466328 kB' 'Inactive: 1421552 kB' 'Active(anon): 127104 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 118268 kB' 'Mapped: 50264 kB' 'Shmem: 10492 kB' 'KReclaimable: 63192 kB' 'Slab: 161456 kB' 'SReclaimable: 63192 kB' 'SUnreclaim: 98264 kB' 'KernelStack: 6552 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304024 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55624 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@32 -- # continue (repeated for each key from MemTotal through HardwareCorrupted before AnonHugePages matches) ...]
00:04:12.789 16:13:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:12.789 16:13:32 -- setup/common.sh@33 -- # echo 0
00:04:12.789 16:13:32 -- setup/common.sh@33 -- # return 0
00:04:12.789 16:13:32 -- setup/hugepages.sh@97 -- # anon=0
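This second verification pass gates its anonymous-hugepage accounting on transparent hugepages: the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above checks that THP is not disabled before AnonHugePages is read (here it is 0 kB either way). A small sketch of that gate, again reusing the hypothetical get_meminfo_sketch and assuming the usual sysfs THP control file:

#!/usr/bin/env bash
# Sketch of the THP gate: only count AnonHugePages when transparent
# hugepages are not set to [never]. Assumes get_meminfo_sketch is defined.
anon=0
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
# thp looks like "always [madvise] never"; the brackets mark the active mode.
if [[ $thp != *"[never]"* ]]; then
	anon=$(get_meminfo_sketch AnonHugePages)   # kB of THP-backed anon memory
fi
echo "anon_hugepages=$anon"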
00:04:12.789 16:13:32 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:12.789 16:13:32 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.789 16:13:32 -- setup/common.sh@18 -- # local node=
00:04:12.789 16:13:32 -- setup/common.sh@19 -- # local var val
00:04:12.789 16:13:32 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.789 16:13:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.789 16:13:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.789 16:13:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.790 16:13:32 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.790 16:13:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.790 16:13:32 -- setup/common.sh@31 -- # IFS=': '
00:04:12.790 16:13:32 -- setup/common.sh@31 -- # read -r var val _
00:04:12.790 16:13:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7894736 kB' 'MemAvailable: 9450180 kB' 'Buffers: 3704 kB' 'Cached: 1767564 kB' 'SwapCached: 0 kB' 'Active: 465852 kB' 'Inactive: 1421552 kB' 'Active(anon): 126628 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117468 kB' 'Mapped: 49908 kB' 'Shmem: 10492 kB' 'KReclaimable: 63192 kB' 'Slab: 161452 kB' 'SReclaimable: 63192 kB' 'SUnreclaim: 98260 kB' 'KernelStack: 6452 kB' 'PageTables: 3708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304024 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@32 -- # continue (repeated for each key from MemTotal through HugePages_Rsvd before HugePages_Surp matches) ...]
00:04:12.791 16:13:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.791 16:13:32 -- setup/common.sh@33 -- # echo 0
00:04:12.791 16:13:32 -- setup/common.sh@33 -- # return 0
00:04:12.791 16:13:32 -- setup/hugepages.sh@99 -- # surp=0
00:04:12.791 16:13:32 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:12.791 16:13:32 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:12.791 16:13:32 -- setup/common.sh@18 -- # local node=
00:04:12.791 16:13:32 -- setup/common.sh@19 -- # local var val
00:04:12.791 16:13:32 -- setup/common.sh@20 -- # local mem_f mem
00:04:12.791 16:13:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.791 16:13:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.791 16:13:32 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.791 16:13:32 -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.791 16:13:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.791 16:13:32 -- setup/common.sh@31 -- # IFS=': '
00:04:12.791 16:13:32 -- setup/common.sh@31 -- # read -r var val _
00:04:12.792 16:13:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7894736 kB' 'MemAvailable: 9450180 kB' 'Buffers: 3704 kB' 'Cached: 1767564 kB' 'SwapCached: 0 kB' 'Active: 465568 kB' 'Inactive: 1421552 kB' 'Active(anon): 126344 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117436 kB' 'Mapped: 49852 kB' 'Shmem: 10492 kB' 'KReclaimable: 63192 kB' 'Slab: 161468 kB' 'SReclaimable: 63192 kB' 'SUnreclaim: 98276 kB' 'KernelStack: 6480 kB' 'PageTables: 3724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304024 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB'
[... setup/common.sh@32 -- # continue (keys MemTotal through Slab compared against HugePages_Rsvd and skipped so far; the scan resumes below) ...]
00:04:12.792 16:13:32 -- setup/common.sh@31 -- # IFS=': '
00:04:12.792 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.792 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.792 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:12.793 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:12.793 16:13:32 -- setup/common.sh@33 -- # echo 0 00:04:12.793 16:13:32 -- setup/common.sh@33 -- # return 0 00:04:12.793 nr_hugepages=1024 00:04:12.793 resv_hugepages=0 00:04:12.793 surplus_hugepages=0 00:04:12.793 16:13:32 -- setup/hugepages.sh@100 -- # resv=0 00:04:12.793 16:13:32 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:12.793 16:13:32 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:12.793 16:13:32 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:12.793 16:13:32 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:12.793 anon_hugepages=0 00:04:12.793 16:13:32 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.793 16:13:32 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:12.793 16:13:32 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:12.793 16:13:32 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:12.793 16:13:32 -- setup/common.sh@18 -- # local node= 00:04:12.793 16:13:32 -- setup/common.sh@19 -- # local var val 00:04:12.793 16:13:32 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.793 16:13:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.793 16:13:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.793 16:13:32 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.793 16:13:32 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.793 16:13:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7894736 kB' 'MemAvailable: 9450180 kB' 'Buffers: 3704 kB' 'Cached: 1767564 kB' 'SwapCached: 0 kB' 'Active: 465428 kB' 'Inactive: 1421552 kB' 'Active(anon): 126204 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 117340 kB' 'Mapped: 49852 kB' 'Shmem: 10492 kB' 'KReclaimable: 63192 kB' 'Slab: 161464 kB' 'SReclaimable: 63192 kB' 'SUnreclaim: 98272 kB' 'KernelStack: 6448 kB' 'PageTables: 3628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13458576 kB' 'Committed_AS: 304024 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 55528 kB' 'VmallocChunk: 0 kB' 'Percpu: 6528 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 198508 kB' 'DirectMap2M: 5044224 kB' 'DirectMap1G: 9437184 kB' 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.793 16:13:32 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:12.793 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.793 16:13:32 -- setup/common.sh@32 -- # continue [... the same field test and continue repeat for each meminfo key ahead of HugePages_Total ...] 00:04:12.794 16:13:32 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l
]] 00:04:12.794 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.794 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:12.795 16:13:32 -- setup/common.sh@33 -- # echo 1024 00:04:12.795 16:13:32 -- setup/common.sh@33 -- # return 0 00:04:12.795 16:13:32 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:12.795 16:13:32 -- setup/hugepages.sh@112 -- # get_nodes 00:04:12.795 16:13:32 -- setup/hugepages.sh@27 -- # local node 00:04:12.795 16:13:32 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:12.795 16:13:32 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:12.795 16:13:32 -- setup/hugepages.sh@32 -- # no_nodes=1 00:04:12.795 16:13:32 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:12.795 16:13:32 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.795 16:13:32 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.795 16:13:32 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:12.795 16:13:32 -- setup/common.sh@17 -- # local get=HugePages_Surp 
00:04:12.795 16:13:32 -- setup/common.sh@18 -- # local node=0 00:04:12.795 16:13:32 -- setup/common.sh@19 -- # local var val 00:04:12.795 16:13:32 -- setup/common.sh@20 -- # local mem_f mem 00:04:12.795 16:13:32 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.795 16:13:32 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:12.795 16:13:32 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:12.795 16:13:32 -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.795 16:13:32 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 16:13:32 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12237096 kB' 'MemFree: 7894736 kB' 'MemUsed: 4342360 kB' 'SwapCached: 0 kB' 'Active: 465520 kB' 'Inactive: 1421552 kB' 'Active(anon): 126296 kB' 'Inactive(anon): 0 kB' 'Active(file): 339224 kB' 'Inactive(file): 1421552 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 1771268 kB' 'Mapped: 49852 kB' 'AnonPages: 117376 kB' 'Shmem: 10492 kB' 'KernelStack: 6464 kB' 'PageTables: 3676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63192 kB' 'Slab: 161464 kB' 'SReclaimable: 63192 kB' 'SUnreclaim: 98272 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.795 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 
00:04:12.795 16:13:32 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.795 16:13:32 -- setup/common.sh@32 -- # continue [... the same field test and continue repeat for each node0 meminfo key ahead of HugePages_Surp ...] 00:04:12.796 16:13:32 -- setup/common.sh@32 -- # [[ FilePmdMapped
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.796 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.796 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 16:13:32 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.796 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.796 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 16:13:32 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.796 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.796 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 16:13:32 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.796 16:13:32 -- setup/common.sh@32 -- # continue 00:04:12.796 16:13:32 -- setup/common.sh@31 -- # IFS=': ' 00:04:12.796 16:13:32 -- setup/common.sh@31 -- # read -r var val _ 00:04:12.796 16:13:32 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.796 16:13:32 -- setup/common.sh@33 -- # echo 0 00:04:12.796 16:13:32 -- setup/common.sh@33 -- # return 0 00:04:12.796 16:13:32 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.796 16:13:32 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:12.796 16:13:32 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:12.796 16:13:32 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:12.796 16:13:32 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:12.796 node0=1024 expecting 1024 00:04:12.796 16:13:32 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:12.796 00:04:12.796 real 0m1.192s 00:04:12.796 user 0m0.496s 00:04:12.796 sys 0m0.705s 00:04:12.796 16:13:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:12.796 16:13:32 -- common/autotest_common.sh@10 -- # set +x 00:04:12.796 ************************************ 00:04:12.796 END TEST no_shrink_alloc 00:04:12.796 ************************************ 00:04:12.796 16:13:32 -- setup/hugepages.sh@217 -- # clear_hp 00:04:12.796 16:13:32 -- setup/hugepages.sh@37 -- # local node hp 00:04:12.796 16:13:32 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:12.796 16:13:32 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:12.796 16:13:32 -- setup/hugepages.sh@41 -- # echo 0 00:04:12.796 16:13:32 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:12.796 16:13:32 -- setup/hugepages.sh@41 -- # echo 0 00:04:12.796 16:13:32 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:12.796 16:13:32 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:12.796 ************************************ 00:04:12.796 END TEST hugepages 00:04:12.796 ************************************ 00:04:12.796 00:04:12.796 real 0m5.424s 00:04:12.796 user 0m2.194s 00:04:12.796 sys 0m2.993s 00:04:12.796 16:13:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:12.796 16:13:32 -- common/autotest_common.sh@10 -- # set +x 00:04:13.057 16:13:32 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:13.057 16:13:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:13.057 16:13:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:13.057 
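With the allocation tests done, the hugepages suite above verified that the kernel's counters still add up and then tore the pool down. A sketch of that bookkeeping, assuming the get_meminfo helper sketched earlier; the values in the comments are the ones this run produced:

    nr_hugepages=1024
    resv=$(get_meminfo HugePages_Rsvd)     # 0 here
    surp=$(get_meminfo HugePages_Surp)     # 0 here
    total=$(get_meminfo HugePages_Total)   # 1024 here
    # Consistent pool: the kernel's total covers configured + surplus + reserved pages.
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2

    # clear_hp-style cleanup: zero every per-node pool for every supported page size.
    for hp in /sys/devices/system/node/node*/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done
    export CLEAR_HUGE=yes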
16:13:32 -- common/autotest_common.sh@10 -- # set +x 00:04:13.057 ************************************ 00:04:13.057 START TEST driver 00:04:13.057 ************************************ 00:04:13.057 16:13:32 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:04:13.057 * Looking for test storage... 00:04:13.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:13.057 16:13:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:13.057 16:13:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:13.057 16:13:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:13.057 16:13:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:13.057 16:13:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:13.057 16:13:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:13.057 16:13:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:13.057 16:13:32 -- scripts/common.sh@335 -- # IFS=.-: 00:04:13.057 16:13:32 -- scripts/common.sh@335 -- # read -ra ver1 00:04:13.057 16:13:32 -- scripts/common.sh@336 -- # IFS=.-: 00:04:13.057 16:13:32 -- scripts/common.sh@336 -- # read -ra ver2 00:04:13.057 16:13:32 -- scripts/common.sh@337 -- # local 'op=<' 00:04:13.057 16:13:32 -- scripts/common.sh@339 -- # ver1_l=2 00:04:13.057 16:13:32 -- scripts/common.sh@340 -- # ver2_l=1 00:04:13.057 16:13:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:13.057 16:13:32 -- scripts/common.sh@343 -- # case "$op" in 00:04:13.057 16:13:32 -- scripts/common.sh@344 -- # : 1 00:04:13.057 16:13:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:13.057 16:13:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:13.057 16:13:32 -- scripts/common.sh@364 -- # decimal 1 00:04:13.057 16:13:32 -- scripts/common.sh@352 -- # local d=1 00:04:13.057 16:13:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:13.057 16:13:32 -- scripts/common.sh@354 -- # echo 1 00:04:13.057 16:13:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:13.057 16:13:32 -- scripts/common.sh@365 -- # decimal 2 00:04:13.057 16:13:32 -- scripts/common.sh@352 -- # local d=2 00:04:13.057 16:13:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:13.057 16:13:32 -- scripts/common.sh@354 -- # echo 2 00:04:13.057 16:13:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:13.057 16:13:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:13.057 16:13:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:13.057 16:13:32 -- scripts/common.sh@367 -- # return 0 00:04:13.057 16:13:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:13.057 16:13:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:13.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.058 --rc genhtml_branch_coverage=1 00:04:13.058 --rc genhtml_function_coverage=1 00:04:13.058 --rc genhtml_legend=1 00:04:13.058 --rc geninfo_all_blocks=1 00:04:13.058 --rc geninfo_unexecuted_blocks=1 00:04:13.058 00:04:13.058 ' 00:04:13.058 16:13:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:13.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.058 --rc genhtml_branch_coverage=1 00:04:13.058 --rc genhtml_function_coverage=1 00:04:13.058 --rc genhtml_legend=1 00:04:13.058 --rc geninfo_all_blocks=1 00:04:13.058 --rc geninfo_unexecuted_blocks=1 00:04:13.058 00:04:13.058 ' 00:04:13.058 16:13:32 -- common/autotest_common.sh@1704 -- # export 
'LCOV=lcov 00:04:13.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.058 --rc genhtml_branch_coverage=1 00:04:13.058 --rc genhtml_function_coverage=1 00:04:13.058 --rc genhtml_legend=1 00:04:13.058 --rc geninfo_all_blocks=1 00:04:13.058 --rc geninfo_unexecuted_blocks=1 00:04:13.058 00:04:13.058 ' 00:04:13.058 16:13:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:13.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:13.058 --rc genhtml_branch_coverage=1 00:04:13.058 --rc genhtml_function_coverage=1 00:04:13.058 --rc genhtml_legend=1 00:04:13.058 --rc geninfo_all_blocks=1 00:04:13.058 --rc geninfo_unexecuted_blocks=1 00:04:13.058 00:04:13.058 ' 00:04:13.058 16:13:32 -- setup/driver.sh@68 -- # setup reset 00:04:13.058 16:13:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:13.058 16:13:32 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:19.646 16:13:38 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:19.646 16:13:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.646 16:13:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.646 16:13:38 -- common/autotest_common.sh@10 -- # set +x 00:04:19.646 ************************************ 00:04:19.646 START TEST guess_driver 00:04:19.646 ************************************ 00:04:19.646 16:13:38 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:19.646 16:13:38 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:19.646 16:13:38 -- setup/driver.sh@47 -- # local fail=0 00:04:19.646 16:13:38 -- setup/driver.sh@49 -- # pick_driver 00:04:19.646 16:13:38 -- setup/driver.sh@36 -- # vfio 00:04:19.646 16:13:38 -- setup/driver.sh@21 -- # local iommu_grups 00:04:19.646 16:13:38 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:19.646 16:13:38 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:19.646 16:13:38 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:19.646 16:13:38 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:04:19.646 16:13:38 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:04:19.646 16:13:38 -- setup/driver.sh@32 -- # return 1 00:04:19.646 16:13:38 -- setup/driver.sh@38 -- # uio 00:04:19.646 16:13:38 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:04:19.646 16:13:38 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:04:19.646 16:13:38 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:04:19.646 16:13:38 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:04:19.646 16:13:38 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio.ko.xz 00:04:19.646 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:04:19.646 16:13:38 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:04:19.646 Looking for driver=uio_pci_generic 00:04:19.646 16:13:38 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:04:19.646 16:13:38 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:19.646 16:13:38 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:04:19.646 16:13:38 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.646 16:13:38 -- setup/driver.sh@45 -- # setup output config 00:04:19.646 16:13:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:19.646 16:13:38 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:19.646 
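The guess_driver trace encodes a two-step policy: prefer vfio when the kernel has populated IOMMU groups (or unsafe no-IOMMU mode is switched on), otherwise settle for uio_pci_generic if modprobe can resolve it to real kernel modules. An illustrative condensation follows; pick_driver and the sysfs paths are taken from the trace, but the body is a simplification, and the vfio-pci string is an assumption since this run never reached that branch:

    pick_driver() {
        shopt -s nullglob   # an empty glob must yield zero elements, not a literal
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        local unsafe_vfio=''
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        # In this run the group count is 0 and unsafe mode is off, so vfio loses...
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            echo vfio-pci
        # ...and uio_pci_generic wins because modprobe resolves it to .ko files.
        elif modprobe --show-depends uio_pci_generic | grep -q '\.ko'; then
            echo uio_pci_generic
        else
            echo 'No valid driver found'
        fi
    }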
16:13:39 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:04:19.646 16:13:39 -- setup/driver.sh@58 -- # continue 00:04:19.646 16:13:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.906 16:13:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.906 16:13:39 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:19.906 16:13:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.906 16:13:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.906 16:13:39 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:19.906 16:13:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.906 16:13:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.906 16:13:39 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:19.906 16:13:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.906 16:13:39 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:19.906 16:13:39 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:04:19.906 16:13:39 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:19.906 16:13:39 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:19.906 16:13:39 -- setup/driver.sh@65 -- # setup reset 00:04:19.906 16:13:39 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.906 16:13:39 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:26.476 00:04:26.476 real 0m6.680s 00:04:26.476 user 0m0.647s 00:04:26.476 sys 0m1.069s 00:04:26.476 16:13:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:26.476 ************************************ 00:04:26.476 END TEST guess_driver 00:04:26.476 16:13:45 -- common/autotest_common.sh@10 -- # set +x 00:04:26.476 ************************************ 00:04:26.476 ************************************ 00:04:26.476 END TEST driver 00:04:26.476 ************************************ 00:04:26.476 00:04:26.476 real 0m12.734s 00:04:26.476 user 0m1.050s 00:04:26.476 sys 0m1.784s 00:04:26.476 16:13:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:26.476 16:13:45 -- common/autotest_common.sh@10 -- # set +x 00:04:26.476 16:13:45 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:26.476 16:13:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.476 16:13:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.476 16:13:45 -- common/autotest_common.sh@10 -- # set +x 00:04:26.476 ************************************ 00:04:26.476 START TEST devices 00:04:26.476 ************************************ 00:04:26.476 16:13:45 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:04:26.476 * Looking for test storage... 
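Both the driver suite above and the devices suite whose storage scan resumes below gate their lcov flags on lt 1.15 2, the component-wise comparison traced from scripts/common.sh. A simplified sketch (the real helper also normalizes each component through a decimal guard, elided here):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-:    # split version strings on dots, dashes and colons
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v a b
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components compare as 0
            if (( a > b )); then [[ $op == '>' ]]; return; fi
            if (( a < b )); then [[ $op == '<' ]]; return; fi
        done
        [[ $op == *'='* ]]   # equal versions satisfy ==, <= and >=
    }

Here lt 1.15 2 succeeds on the first component (1 < 2), so the branch- and function-coverage flags get exported, the same path both traces take.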
00:04:26.477 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:04:26.477 16:13:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:26.477 16:13:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:26.477 16:13:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:26.477 16:13:45 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:26.477 16:13:45 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:26.477 16:13:45 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:26.477 16:13:45 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:26.477 16:13:45 -- scripts/common.sh@335 -- # IFS=.-: 00:04:26.477 16:13:45 -- scripts/common.sh@335 -- # read -ra ver1 00:04:26.477 16:13:45 -- scripts/common.sh@336 -- # IFS=.-: 00:04:26.477 16:13:45 -- scripts/common.sh@336 -- # read -ra ver2 00:04:26.477 16:13:45 -- scripts/common.sh@337 -- # local 'op=<' 00:04:26.477 16:13:45 -- scripts/common.sh@339 -- # ver1_l=2 00:04:26.477 16:13:45 -- scripts/common.sh@340 -- # ver2_l=1 00:04:26.477 16:13:45 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:26.477 16:13:45 -- scripts/common.sh@343 -- # case "$op" in 00:04:26.477 16:13:45 -- scripts/common.sh@344 -- # : 1 00:04:26.477 16:13:45 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:26.477 16:13:45 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:26.477 16:13:45 -- scripts/common.sh@364 -- # decimal 1 00:04:26.477 16:13:45 -- scripts/common.sh@352 -- # local d=1 00:04:26.477 16:13:45 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:26.477 16:13:45 -- scripts/common.sh@354 -- # echo 1 00:04:26.477 16:13:45 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:26.477 16:13:45 -- scripts/common.sh@365 -- # decimal 2 00:04:26.477 16:13:45 -- scripts/common.sh@352 -- # local d=2 00:04:26.477 16:13:45 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:26.477 16:13:45 -- scripts/common.sh@354 -- # echo 2 00:04:26.477 16:13:45 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:26.477 16:13:45 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:26.477 16:13:45 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:26.477 16:13:45 -- scripts/common.sh@367 -- # return 0 00:04:26.477 16:13:45 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:26.477 16:13:45 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:26.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.477 --rc genhtml_branch_coverage=1 00:04:26.477 --rc genhtml_function_coverage=1 00:04:26.477 --rc genhtml_legend=1 00:04:26.477 --rc geninfo_all_blocks=1 00:04:26.477 --rc geninfo_unexecuted_blocks=1 00:04:26.477 00:04:26.477 ' 00:04:26.477 16:13:45 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:26.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.477 --rc genhtml_branch_coverage=1 00:04:26.477 --rc genhtml_function_coverage=1 00:04:26.477 --rc genhtml_legend=1 00:04:26.477 --rc geninfo_all_blocks=1 00:04:26.477 --rc geninfo_unexecuted_blocks=1 00:04:26.477 00:04:26.477 ' 00:04:26.477 16:13:45 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:26.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.477 --rc genhtml_branch_coverage=1 00:04:26.477 --rc genhtml_function_coverage=1 00:04:26.477 --rc genhtml_legend=1 00:04:26.477 --rc geninfo_all_blocks=1 00:04:26.477 --rc geninfo_unexecuted_blocks=1 00:04:26.477 00:04:26.477 ' 00:04:26.477 16:13:45 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:26.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:26.477 --rc genhtml_branch_coverage=1 00:04:26.477 --rc genhtml_function_coverage=1 00:04:26.477 --rc genhtml_legend=1 00:04:26.477 --rc geninfo_all_blocks=1 00:04:26.477 --rc geninfo_unexecuted_blocks=1 00:04:26.477 00:04:26.477 ' 00:04:26.477 16:13:45 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:26.477 16:13:45 -- setup/devices.sh@192 -- # setup reset 00:04:26.477 16:13:45 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:26.477 16:13:45 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:26.736 16:13:46 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:26.736 16:13:46 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:04:26.736 16:13:46 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:04:26.736 16:13:46 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:04:26.736 16:13:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:26.736 16:13:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:04:26.736 16:13:46 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:04:26.736 16:13:46 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:04:26.736 16:13:46 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:26.736 16:13:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:26.736 16:13:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:04:26.736 16:13:46 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:04:26.736 16:13:46 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:26.736 16:13:46 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:26.736 16:13:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:26.736 16:13:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:04:26.736 16:13:46 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:04:26.736 16:13:46 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:26.736 16:13:46 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:26.736 16:13:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:26.736 16:13:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:04:26.736 16:13:46 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:04:26.736 16:13:46 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:26.736 16:13:46 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:26.736 16:13:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:26.736 16:13:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:04:26.736 16:13:46 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:04:26.736 16:13:46 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:26.736 16:13:46 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:26.736 16:13:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:26.736 16:13:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:04:26.736 16:13:46 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:04:26.736 16:13:46 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:26.736 16:13:46 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:26.736 16:13:46 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:04:26.736 16:13:46 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:04:26.736 16:13:46 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:04:26.736 16:13:46 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:26.736 16:13:46 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:04:26.736 16:13:46 -- setup/devices.sh@196 -- # blocks=() 00:04:26.736 16:13:46 -- setup/devices.sh@196 -- # declare -a blocks 00:04:26.736 16:13:46 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:26.736 16:13:46 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:26.736 16:13:46 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:26.736 16:13:46 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:26.736 16:13:46 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:26.736 16:13:46 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:26.736 16:13:46 -- setup/devices.sh@202 -- # pci=0000:00:09.0 00:04:26.736 16:13:46 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\9\.\0* ]] 00:04:26.736 16:13:46 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:26.736 16:13:46 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:04:26.736 16:13:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:04:26.736 No valid GPT data, bailing 00:04:26.736 16:13:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:26.736 16:13:46 -- scripts/common.sh@393 -- # pt= 00:04:26.736 16:13:46 -- scripts/common.sh@394 -- # return 1 00:04:26.736 16:13:46 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:26.736 16:13:46 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:26.736 16:13:46 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:26.736 16:13:46 -- setup/common.sh@80 -- # echo 1073741824 00:04:26.736 16:13:46 -- setup/devices.sh@204 -- # (( 1073741824 >= min_disk_size )) 00:04:26.736 16:13:46 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:26.736 16:13:46 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:26.736 16:13:46 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:26.736 16:13:46 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:26.736 16:13:46 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:26.736 16:13:46 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:26.736 16:13:46 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:04:26.736 16:13:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:04:26.736 No valid GPT data, bailing 00:04:26.736 16:13:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:26.736 16:13:46 -- scripts/common.sh@393 -- # pt= 00:04:26.736 16:13:46 -- scripts/common.sh@394 -- # return 1 00:04:26.736 16:13:46 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:26.736 16:13:46 -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:26.736 16:13:46 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:26.736 16:13:46 -- setup/common.sh@80 -- # echo 4294967296 00:04:26.736 16:13:46 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:26.736 16:13:46 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:26.736 16:13:46 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:26.736 16:13:46 -- setup/devices.sh@200 -- # 
for block in "/sys/block/nvme"!(*c*) 00:04:26.736 16:13:46 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:04:26.736 16:13:46 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:26.736 16:13:46 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:26.736 16:13:46 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:26.736 16:13:46 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:04:26.736 16:13:46 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:04:26.736 16:13:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:04:26.736 No valid GPT data, bailing 00:04:26.736 16:13:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:26.995 16:13:46 -- scripts/common.sh@393 -- # pt= 00:04:26.995 16:13:46 -- scripts/common.sh@394 -- # return 1 00:04:26.995 16:13:46 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:04:26.995 16:13:46 -- setup/common.sh@76 -- # local dev=nvme1n2 00:04:26.995 16:13:46 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:04:26.995 16:13:46 -- setup/common.sh@80 -- # echo 4294967296 00:04:26.995 16:13:46 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:26.995 16:13:46 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:26.995 16:13:46 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:26.995 16:13:46 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:26.995 16:13:46 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:04:26.995 16:13:46 -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:26.995 16:13:46 -- setup/devices.sh@202 -- # pci=0000:00:08.0 00:04:26.995 16:13:46 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\8\.\0* ]] 00:04:26.995 16:13:46 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:04:26.995 16:13:46 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:04:26.995 16:13:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:04:26.995 No valid GPT data, bailing 00:04:26.995 16:13:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:26.995 16:13:46 -- scripts/common.sh@393 -- # pt= 00:04:26.995 16:13:46 -- scripts/common.sh@394 -- # return 1 00:04:26.995 16:13:46 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:04:26.995 16:13:46 -- setup/common.sh@76 -- # local dev=nvme1n3 00:04:26.995 16:13:46 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:04:26.995 16:13:46 -- setup/common.sh@80 -- # echo 4294967296 00:04:26.995 16:13:46 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:04:26.995 16:13:46 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:26.995 16:13:46 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:08.0 00:04:26.995 16:13:46 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:26.995 16:13:46 -- setup/devices.sh@201 -- # ctrl=nvme2n1 00:04:26.995 16:13:46 -- setup/devices.sh@201 -- # ctrl=nvme2 00:04:26.995 16:13:46 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:04:26.995 16:13:46 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:04:26.995 16:13:46 -- setup/devices.sh@204 -- # block_in_use nvme2n1 00:04:26.995 16:13:46 -- scripts/common.sh@380 -- # local block=nvme2n1 pt 00:04:26.995 16:13:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme2n1 00:04:26.995 No valid GPT data, bailing 00:04:26.995 16:13:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:26.995 
16:13:46 -- scripts/common.sh@393 -- # pt= 00:04:26.995 16:13:46 -- scripts/common.sh@394 -- # return 1 00:04:26.995 16:13:46 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme2n1 00:04:26.995 16:13:46 -- setup/common.sh@76 -- # local dev=nvme2n1 00:04:26.995 16:13:46 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme2n1 ]] 00:04:26.995 16:13:46 -- setup/common.sh@80 -- # echo 6343335936 00:04:26.995 16:13:46 -- setup/devices.sh@204 -- # (( 6343335936 >= min_disk_size )) 00:04:26.995 16:13:46 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:26.995 16:13:46 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:04:26.995 16:13:46 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:26.995 16:13:46 -- setup/devices.sh@201 -- # ctrl=nvme3n1 00:04:26.995 16:13:46 -- setup/devices.sh@201 -- # ctrl=nvme3 00:04:26.995 16:13:46 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:04:26.995 16:13:46 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:04:26.995 16:13:46 -- setup/devices.sh@204 -- # block_in_use nvme3n1 00:04:26.995 16:13:46 -- scripts/common.sh@380 -- # local block=nvme3n1 pt 00:04:26.995 16:13:46 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme3n1 00:04:26.995 No valid GPT data, bailing 00:04:26.995 16:13:46 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:26.995 16:13:46 -- scripts/common.sh@393 -- # pt= 00:04:26.995 16:13:46 -- scripts/common.sh@394 -- # return 1 00:04:26.995 16:13:46 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme3n1 00:04:26.995 16:13:46 -- setup/common.sh@76 -- # local dev=nvme3n1 00:04:26.995 16:13:46 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme3n1 ]] 00:04:26.995 16:13:46 -- setup/common.sh@80 -- # echo 5368709120 00:04:26.995 16:13:46 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:04:26.995 16:13:46 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:26.995 16:13:46 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:04:26.995 16:13:46 -- setup/devices.sh@209 -- # (( 5 > 0 )) 00:04:26.995 16:13:46 -- setup/devices.sh@211 -- # declare -r test_disk=nvme1n1 00:04:26.995 16:13:46 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:26.995 16:13:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.995 16:13:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.995 16:13:46 -- common/autotest_common.sh@10 -- # set +x 00:04:26.995 ************************************ 00:04:26.995 START TEST nvme_mount 00:04:26.995 ************************************ 00:04:26.995 16:13:46 -- common/autotest_common.sh@1114 -- # nvme_mount 00:04:26.995 16:13:46 -- setup/devices.sh@95 -- # nvme_disk=nvme1n1 00:04:26.995 16:13:46 -- setup/devices.sh@96 -- # nvme_disk_p=nvme1n1p1 00:04:26.995 16:13:46 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:26.995 16:13:46 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:26.995 16:13:46 -- setup/devices.sh@101 -- # partition_drive nvme1n1 1 00:04:26.995 16:13:46 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:26.995 16:13:46 -- setup/common.sh@40 -- # local part_no=1 00:04:26.995 16:13:46 -- setup/common.sh@41 -- # local size=1073741824 00:04:26.995 16:13:46 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:26.995 16:13:46 -- setup/common.sh@44 -- # parts=() 00:04:26.995 16:13:46 -- 
setup/common.sh@44 -- # local parts 00:04:26.995 16:13:46 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:26.995 16:13:46 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.995 16:13:46 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:26.995 16:13:46 -- setup/common.sh@46 -- # (( part++ )) 00:04:26.995 16:13:46 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.995 16:13:46 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:26.995 16:13:46 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:26.995 16:13:46 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 00:04:28.370 Creating new GPT entries in memory. 00:04:28.370 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:28.370 other utilities. 00:04:28.370 16:13:47 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:28.370 16:13:47 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.370 16:13:47 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:28.370 16:13:47 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:28.370 16:13:47 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:29.308 Creating new GPT entries in memory. 00:04:29.308 The operation has completed successfully. 00:04:29.308 16:13:48 -- setup/common.sh@57 -- # (( part++ )) 00:04:29.308 16:13:48 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.308 16:13:48 -- setup/common.sh@62 -- # wait 53714 00:04:29.308 16:13:48 -- setup/devices.sh@102 -- # mkfs /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.308 16:13:48 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:04:29.308 16:13:48 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.308 16:13:48 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1p1 ]] 00:04:29.308 16:13:48 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1p1 00:04:29.308 16:13:48 -- setup/common.sh@72 -- # mount /dev/nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.308 16:13:48 -- setup/devices.sh@105 -- # verify 0000:00:08.0 nvme1n1:nvme1n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:29.308 16:13:48 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:29.308 16:13:48 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1p1 00:04:29.308 16:13:48 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.308 16:13:48 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:29.308 16:13:48 -- setup/devices.sh@53 -- # local found=0 00:04:29.308 16:13:48 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.308 16:13:48 -- setup/devices.sh@56 -- # : 00:04:29.308 16:13:48 -- setup/devices.sh@59 -- # local pci status 00:04:29.309 16:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.309 16:13:48 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:29.309 16:13:48 -- setup/devices.sh@47 -- # setup output config 00:04:29.309 16:13:48 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.309 16:13:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:29.309 16:13:48 -- setup/devices.sh@62 -- # [[ 
0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.309 16:13:48 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.309 16:13:49 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.309 16:13:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.567 16:13:49 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.567 16:13:49 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1\p\1* ]] 00:04:29.567 16:13:49 -- setup/devices.sh@63 -- # found=1 00:04:29.567 16:13:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.567 16:13:49 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.567 16:13:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.567 16:13:49 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.567 16:13:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.826 16:13:49 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:29.826 16:13:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:29.826 16:13:49 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:29.826 16:13:49 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:29.826 16:13:49 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.826 16:13:49 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:29.826 16:13:49 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:29.826 16:13:49 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:29.826 16:13:49 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.826 16:13:49 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:29.826 16:13:49 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:29.826 16:13:49 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:29.826 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:29.826 16:13:49 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:29.826 16:13:49 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:30.086 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:30.086 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:30.086 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:30.086 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:30.086 16:13:49 -- setup/devices.sh@113 -- # mkfs /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:04:30.086 16:13:49 -- setup/common.sh@66 -- # local dev=/dev/nvme1n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:04:30.086 16:13:49 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:30.086 16:13:49 -- setup/common.sh@70 -- # [[ -e /dev/nvme1n1 ]] 00:04:30.086 16:13:49 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme1n1 1024M 00:04:30.086 16:13:49 -- setup/common.sh@72 -- # mount /dev/nvme1n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:30.086 16:13:49 -- setup/devices.sh@116 -- # verify 0000:00:08.0 nvme1n1:nvme1n1 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:30.086 16:13:49 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:30.086 16:13:49 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme1n1 00:04:30.086 16:13:49 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:30.086 16:13:49 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:30.086 16:13:49 -- setup/devices.sh@53 -- # local found=0 00:04:30.086 16:13:49 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.086 16:13:49 -- setup/devices.sh@56 -- # : 00:04:30.086 16:13:49 -- setup/devices.sh@59 -- # local pci status 00:04:30.086 16:13:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.086 16:13:49 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:30.086 16:13:49 -- setup/devices.sh@47 -- # setup output config 00:04:30.086 16:13:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.086 16:13:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:30.086 16:13:49 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:30.086 16:13:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.343 16:13:49 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:30.343 16:13:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.602 16:13:50 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:30.602 16:13:50 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme1n1:nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\1\n\1* ]] 00:04:30.602 16:13:50 -- setup/devices.sh@63 -- # found=1 00:04:30.602 16:13:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.602 16:13:50 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:30.602 16:13:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.602 16:13:50 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:30.602 16:13:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.602 16:13:50 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:30.602 16:13:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.860 16:13:50 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.860 16:13:50 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:04:30.860 16:13:50 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:30.860 16:13:50 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.860 16:13:50 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:04:30.860 16:13:50 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:30.860 16:13:50 -- setup/devices.sh@125 -- # verify 0000:00:08.0 data@nvme1n1 '' '' 00:04:30.860 16:13:50 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:30.860 16:13:50 -- setup/devices.sh@49 -- # local mounts=data@nvme1n1 00:04:30.860 16:13:50 -- setup/devices.sh@50 -- # local mount_point= 00:04:30.860 16:13:50 -- setup/devices.sh@51 -- # local test_file= 00:04:30.860 16:13:50 -- setup/devices.sh@53 -- # local found=0 
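Aside: the xtrace above is setup/devices.sh re-running `setup.sh status` with PCI_ALLOWED pinned to 0000:00:08.0 and scanning each `read -r pci _ _ status` line for the mount it just created before flipping found=1. A minimal standalone sketch of that status-matching idea, with a hypothetical inline listing standing in for the real `setup.sh status` output:

    #!/usr/bin/env bash
    # Sketch: scan "<bdf> <vendor> <device> <status>" lines and flag the
    # target BDF when its status advertises the expected active device.
    # target_bdf and expected are illustrative values, not taken from the
    # harness itself.
    target_bdf=0000:00:08.0
    expected='nvme1n1:nvme1n1p1'
    found=0
    while read -r pci _ _ status; do
        [[ $pci == "$target_bdf" ]] || continue
        # Same idea as the xtrace: a bash glob match on the status text.
        if [[ $status == *"Active devices: "*"$expected"* ]]; then
            found=1
        fi
    done < <(printf '%s\n' \
        '0000:00:06.0 1b36 0010 so not binding PCI dev' \
        '0000:00:08.0 1b36 0010 Active devices: mount@nvme1n1:nvme1n1p1, so not binding PCI dev')
    echo "found=$found"

The real helper additionally rechecks the test file under the mount point (the `[[ -e .../test_nvme ]]` lines above) before declaring the device verified.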
00:04:30.860 16:13:50 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:30.860 16:13:50 -- setup/devices.sh@59 -- # local pci status 00:04:30.860 16:13:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.860 16:13:50 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:30.860 16:13:50 -- setup/devices.sh@47 -- # setup output config 00:04:30.860 16:13:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.860 16:13:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:30.860 16:13:50 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:30.860 16:13:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.860 16:13:50 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:30.860 16:13:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.119 16:13:50 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:31.119 16:13:50 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme1n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\1\n\1* ]] 00:04:31.119 16:13:50 -- setup/devices.sh@63 -- # found=1 00:04:31.119 16:13:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.119 16:13:50 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:31.119 16:13:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.377 16:13:50 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:31.378 16:13:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.378 16:13:51 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:31.378 16:13:51 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.378 16:13:51 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:31.378 16:13:51 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:31.378 16:13:51 -- setup/devices.sh@68 -- # return 0 00:04:31.378 16:13:51 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:31.378 16:13:51 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:31.378 16:13:51 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:31.378 16:13:51 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:31.378 16:13:51 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:31.378 /dev/nvme1n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:31.378 00:04:31.378 real 0m4.414s 00:04:31.378 user 0m0.920s 00:04:31.378 sys 0m1.195s 00:04:31.378 16:13:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:31.378 16:13:51 -- common/autotest_common.sh@10 -- # set +x 00:04:31.378 ************************************ 00:04:31.378 END TEST nvme_mount 00:04:31.378 ************************************ 00:04:31.378 16:13:51 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:31.378 16:13:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:31.378 16:13:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:31.378 16:13:51 -- common/autotest_common.sh@10 -- # set +x 00:04:31.378 ************************************ 00:04:31.378 START TEST dm_mount 00:04:31.378 ************************************ 00:04:31.378 16:13:51 -- common/autotest_common.sh@1114 -- # dm_mount 00:04:31.378 16:13:51 -- setup/devices.sh@144 -- # pv=nvme1n1 00:04:31.378 16:13:51 -- setup/devices.sh@145 -- # pv0=nvme1n1p1 00:04:31.378 16:13:51 -- setup/devices.sh@146 -- # pv1=nvme1n1p2 00:04:31.378 16:13:51 -- setup/devices.sh@148 -- # 
partition_drive nvme1n1 00:04:31.378 16:13:51 -- setup/common.sh@39 -- # local disk=nvme1n1 00:04:31.378 16:13:51 -- setup/common.sh@40 -- # local part_no=2 00:04:31.378 16:13:51 -- setup/common.sh@41 -- # local size=1073741824 00:04:31.378 16:13:51 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:31.378 16:13:51 -- setup/common.sh@44 -- # parts=() 00:04:31.378 16:13:51 -- setup/common.sh@44 -- # local parts 00:04:31.378 16:13:51 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:31.378 16:13:51 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.378 16:13:51 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:31.378 16:13:51 -- setup/common.sh@46 -- # (( part++ )) 00:04:31.378 16:13:51 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.378 16:13:51 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:31.378 16:13:51 -- setup/common.sh@46 -- # (( part++ )) 00:04:31.378 16:13:51 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:31.378 16:13:51 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:04:31.378 16:13:51 -- setup/common.sh@56 -- # sgdisk /dev/nvme1n1 --zap-all 00:04:31.378 16:13:51 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme1n1p1 nvme1n1p2 00:04:32.752 Creating new GPT entries in memory. 00:04:32.752 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:32.752 other utilities. 00:04:32.752 16:13:52 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:32.752 16:13:52 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:32.752 16:13:52 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:32.752 16:13:52 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:32.752 16:13:52 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=1:2048:264191 00:04:33.688 Creating new GPT entries in memory. 00:04:33.688 The operation has completed successfully. 00:04:33.688 16:13:53 -- setup/common.sh@57 -- # (( part++ )) 00:04:33.688 16:13:53 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:33.688 16:13:53 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:33.688 16:13:53 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:33.688 16:13:53 -- setup/common.sh@60 -- # flock /dev/nvme1n1 sgdisk /dev/nvme1n1 --new=2:264192:526335 00:04:34.622 The operation has completed successfully. 
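Worth decoding from the partitioning xtrace: `(( size /= 4096 ))` turns the 1073741824 size constant into 262144, which is exactly the span of each `sgdisk --new` call (2048..264191, then 264192..526335), i.e. 128 MiB of 512-byte sectors per partition. A condensed sketch of the same sequence, assuming a disposable test disk at /dev/nvme1n1 and using plain partprobe/udevadm where the harness uses flock plus sync_dev_uevents.sh for synchronization:

    #!/usr/bin/env bash
    # Sketch: reproduce the zap + two-partition GPT layout from the log.
    # Destructive; intended only for a scratch test disk.
    disk=/dev/nvme1n1
    sgdisk "$disk" --zap-all               # destroy GPT and protective MBR
    sgdisk "$disk" --new=1:2048:264191     # partition 1: 262144 sectors
    sgdisk "$disk" --new=2:264192:526335   # partition 2: same size
    partprobe "$disk"                      # ask the kernel to re-read the table
    udevadm settle                         # wait for nvme1n1p1/p2 device nodes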
00:04:34.622 16:13:54 -- setup/common.sh@57 -- # (( part++ )) 00:04:34.622 16:13:54 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.622 16:13:54 -- setup/common.sh@62 -- # wait 54337 00:04:34.622 16:13:54 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:34.622 16:13:54 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:34.622 16:13:54 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:34.622 16:13:54 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:34.622 16:13:54 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:34.622 16:13:54 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:34.622 16:13:54 -- setup/devices.sh@161 -- # break 00:04:34.622 16:13:54 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:34.622 16:13:54 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:34.622 16:13:54 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:34.622 16:13:54 -- setup/devices.sh@166 -- # dm=dm-0 00:04:34.622 16:13:54 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme1n1p1/holders/dm-0 ]] 00:04:34.622 16:13:54 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme1n1p2/holders/dm-0 ]] 00:04:34.622 16:13:54 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:34.622 16:13:54 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:04:34.622 16:13:54 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:34.622 16:13:54 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:34.622 16:13:54 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:34.622 16:13:54 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:34.622 16:13:54 -- setup/devices.sh@174 -- # verify 0000:00:08.0 nvme1n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:34.622 16:13:54 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:34.622 16:13:54 -- setup/devices.sh@49 -- # local mounts=nvme1n1:nvme_dm_test 00:04:34.622 16:13:54 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:34.622 16:13:54 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:34.622 16:13:54 -- setup/devices.sh@53 -- # local found=0 00:04:34.622 16:13:54 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:34.622 16:13:54 -- setup/devices.sh@56 -- # : 00:04:34.622 16:13:54 -- setup/devices.sh@59 -- # local pci status 00:04:34.622 16:13:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.622 16:13:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:34.622 16:13:54 -- setup/devices.sh@47 -- # setup output config 00:04:34.622 16:13:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.622 16:13:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:34.622 16:13:54 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:34.622 16:13:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.881 16:13:54 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:34.881 16:13:54 -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.138 16:13:54 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:35.138 16:13:54 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0,mount@nvme1n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\1\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:35.138 16:13:54 -- setup/devices.sh@63 -- # found=1 00:04:35.138 16:13:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.138 16:13:54 -- setup/devices.sh@62 -- # [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:35.138 16:13:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.138 16:13:54 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:35.138 16:13:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.138 16:13:54 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:35.138 16:13:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.396 16:13:54 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.396 16:13:54 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:04:35.396 16:13:54 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:35.396 16:13:54 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:04:35.396 16:13:54 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:04:35.396 16:13:54 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:35.396 16:13:54 -- setup/devices.sh@184 -- # verify 0000:00:08.0 holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 '' '' 00:04:35.396 16:13:54 -- setup/devices.sh@48 -- # local dev=0000:00:08.0 00:04:35.396 16:13:54 -- setup/devices.sh@49 -- # local mounts=holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0 00:04:35.396 16:13:54 -- setup/devices.sh@50 -- # local mount_point= 00:04:35.396 16:13:54 -- setup/devices.sh@51 -- # local test_file= 00:04:35.396 16:13:54 -- setup/devices.sh@53 -- # local found=0 00:04:35.396 16:13:54 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:35.396 16:13:54 -- setup/devices.sh@59 -- # local pci status 00:04:35.396 16:13:54 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.396 16:13:54 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:08.0 00:04:35.396 16:13:54 -- setup/devices.sh@47 -- # setup output config 00:04:35.396 16:13:54 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.396 16:13:54 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:04:35.396 16:13:55 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:35.396 16:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.396 16:13:55 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:35.396 16:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.652 16:13:55 -- setup/devices.sh@62 -- # [[ 0000:00:08.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:35.652 16:13:55 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme1n1p1:dm-0,holder@nvme1n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\1\n\1\p\2\:\d\m\-\0* ]] 00:04:35.652 16:13:55 -- setup/devices.sh@63 -- # found=1 00:04:35.652 16:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.652 16:13:55 -- setup/devices.sh@62 -- 
# [[ 0000:00:09.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:35.652 16:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.910 16:13:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:35.910 16:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.910 16:13:55 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\8\.\0 ]] 00:04:35.910 16:13:55 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:35.910 16:13:55 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:35.910 16:13:55 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:35.910 16:13:55 -- setup/devices.sh@68 -- # return 0 00:04:35.910 16:13:55 -- setup/devices.sh@187 -- # cleanup_dm 00:04:35.910 16:13:55 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:35.910 16:13:55 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:35.910 16:13:55 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:35.910 16:13:55 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:35.910 16:13:55 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme1n1p1 00:04:35.910 /dev/nvme1n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:35.910 16:13:55 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:35.910 16:13:55 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme1n1p2 00:04:35.910 00:04:35.910 real 0m4.461s 00:04:35.910 user 0m0.596s 00:04:35.910 sys 0m0.794s 00:04:35.910 16:13:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:35.910 ************************************ 00:04:35.910 END TEST dm_mount 00:04:35.910 ************************************ 00:04:35.910 16:13:55 -- common/autotest_common.sh@10 -- # set +x 00:04:35.910 16:13:55 -- setup/devices.sh@1 -- # cleanup 00:04:35.910 16:13:55 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:35.910 16:13:55 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:04:35.910 16:13:55 -- setup/devices.sh@24 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:35.910 16:13:55 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme1n1p1 00:04:35.910 16:13:55 -- setup/devices.sh@27 -- # [[ -b /dev/nvme1n1 ]] 00:04:35.910 16:13:55 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme1n1 00:04:36.169 /dev/nvme1n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:04:36.169 /dev/nvme1n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:04:36.169 /dev/nvme1n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:36.169 /dev/nvme1n1: calling ioctl to re-read partition table: Success 00:04:36.169 16:13:55 -- setup/devices.sh@12 -- # cleanup_dm 00:04:36.169 16:13:55 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:04:36.169 16:13:55 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:36.169 16:13:55 -- setup/devices.sh@39 -- # [[ -b /dev/nvme1n1p1 ]] 00:04:36.169 16:13:55 -- setup/devices.sh@42 -- # [[ -b /dev/nvme1n1p2 ]] 00:04:36.169 16:13:55 -- setup/devices.sh@14 -- # [[ -b /dev/nvme1n1 ]] 00:04:36.169 16:13:55 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme1n1 00:04:36.169 00:04:36.169 real 0m10.533s 00:04:36.169 user 0m2.261s 00:04:36.169 sys 0m2.618s 00:04:36.169 16:13:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:36.169 16:13:55 -- common/autotest_common.sh@10 -- # set +x 00:04:36.169 ************************************ 00:04:36.169 END TEST devices 00:04:36.169 
************************************ 00:04:36.169 00:04:36.169 real 0m40.032s 00:04:36.169 user 0m7.904s 00:04:36.169 sys 0m10.660s 00:04:36.169 16:13:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:36.169 16:13:55 -- common/autotest_common.sh@10 -- # set +x 00:04:36.169 ************************************ 00:04:36.169 END TEST setup.sh 00:04:36.169 ************************************ 00:04:36.428 16:13:55 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:36.428 Hugepages 00:04:36.428 node hugesize free / total 00:04:36.428 node0 1048576kB 0 / 0 00:04:36.428 node0 2048kB 2048 / 2048 00:04:36.428 00:04:36.428 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:36.428 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:36.686 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:36.686 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:36.686 NVMe 0000:00:08.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:36.686 NVMe 0000:00:09.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:36.686 16:13:56 -- spdk/autotest.sh@128 -- # uname -s 00:04:36.686 16:13:56 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:04:36.686 16:13:56 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:04:36.686 16:13:56 -- common/autotest_common.sh@1526 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:37.623 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:37.623 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.623 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.623 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.623 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.623 16:13:57 -- common/autotest_common.sh@1527 -- # sleep 1 00:04:38.998 16:13:58 -- common/autotest_common.sh@1528 -- # bdfs=() 00:04:38.998 16:13:58 -- common/autotest_common.sh@1528 -- # local bdfs 00:04:38.998 16:13:58 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:04:38.998 16:13:58 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:04:38.998 16:13:58 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:38.998 16:13:58 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:38.998 16:13:58 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:38.998 16:13:58 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:38.998 16:13:58 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:38.998 16:13:58 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:38.998 16:13:58 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:38.998 16:13:58 -- common/autotest_common.sh@1531 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:39.256 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:39.256 Waiting for block devices as requested 00:04:39.256 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:04:39.256 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:04:39.514 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:04:39.514 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:04:44.793 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:04:44.793 16:14:04 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 
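The pre-cleanup pass above collects controller addresses with `gen_nvme.sh | jq -r '.config[].params.traddr'`. For reference, a sysfs-only sketch that arrives at the same BDF list without SPDK's config generator (it assumes PCIe-attached controllers, like the four QEMU devices in this run; fabrics controllers resolve differently):

    #!/usr/bin/env bash
    # Sketch: list NVMe controller PCI addresses straight from sysfs.
    bdfs=()
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        # The device symlink resolves to the PCI dir, e.g.
        # /sys/devices/pci0000:00/0000:00:06.0
        bdf=$(basename "$(readlink -f "$ctrl/device")")
        bdfs+=("$bdf")
    done
    printf '%s\n' "${bdfs[@]}"

Either route yields the 0000:00:06.0 through 0000:00:09.0 set that the `for bdf in "${bdfs[@]}"` loop below walks.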
00:04:44.793 16:14:04 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:04:44.793 16:14:04 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:44.793 16:14:04 -- common/autotest_common.sh@1497 -- # grep 0000:00:06.0/nvme/nvme 00:04:44.793 16:14:04 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:44.793 16:14:04 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 ]] 00:04:44.793 16:14:04 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme2 00:04:44.793 16:14:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme2 00:04:44.793 16:14:04 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme2 00:04:44.793 16:14:04 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme2 ]] 00:04:44.793 16:14:04 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:44.794 16:14:04 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:44.794 16:14:04 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:44.794 16:14:04 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:44.794 16:14:04 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:44.794 16:14:04 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:44.794 16:14:04 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme2 00:04:44.794 16:14:04 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:44.794 16:14:04 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:44.794 16:14:04 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:44.794 16:14:04 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:44.794 16:14:04 -- common/autotest_common.sh@1552 -- # continue 00:04:44.794 16:14:04 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:44.794 16:14:04 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:04:44.794 16:14:04 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:44.794 16:14:04 -- common/autotest_common.sh@1497 -- # grep 0000:00:07.0/nvme/nvme 00:04:44.794 16:14:04 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:44.794 16:14:04 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 ]] 00:04:44.794 16:14:04 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme3 00:04:44.794 16:14:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme3 00:04:44.794 16:14:04 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme3 00:04:44.794 16:14:04 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme3 ]] 00:04:44.794 16:14:04 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:44.794 16:14:04 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:44.794 16:14:04 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:44.794 16:14:04 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:44.794 16:14:04 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:44.794 16:14:04 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:44.794 16:14:04 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme3 00:04:44.794 16:14:04 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:44.794 16:14:04 -- 
common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:44.794 16:14:04 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:44.794 16:14:04 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:44.794 16:14:04 -- common/autotest_common.sh@1552 -- # continue 00:04:44.794 16:14:04 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:44.794 16:14:04 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:08.0 00:04:44.794 16:14:04 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:44.794 16:14:04 -- common/autotest_common.sh@1497 -- # grep 0000:00:08.0/nvme/nvme 00:04:44.794 16:14:04 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:44.794 16:14:04 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 ]] 00:04:44.794 16:14:04 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:08.0/nvme/nvme1 00:04:44.794 16:14:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme1 00:04:44.794 16:14:04 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme1 00:04:44.794 16:14:04 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme1 ]] 00:04:44.794 16:14:04 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:44.794 16:14:04 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:44.794 16:14:04 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:44.794 16:14:04 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 00:04:44.794 16:14:04 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:44.794 16:14:04 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:44.794 16:14:04 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme1 00:04:44.794 16:14:04 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:44.794 16:14:04 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:44.794 16:14:04 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:44.794 16:14:04 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:44.794 16:14:04 -- common/autotest_common.sh@1552 -- # continue 00:04:44.794 16:14:04 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:04:44.794 16:14:04 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:00:09.0 00:04:44.794 16:14:04 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:44.794 16:14:04 -- common/autotest_common.sh@1497 -- # grep 0000:00:09.0/nvme/nvme 00:04:44.794 16:14:04 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:44.794 16:14:04 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 ]] 00:04:44.794 16:14:04 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:00/0000:00:09.0/nvme/nvme0 00:04:44.794 16:14:04 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:04:44.794 16:14:04 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:04:44.794 16:14:04 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:04:44.794 16:14:04 -- common/autotest_common.sh@1540 -- # grep oacs 00:04:44.794 16:14:04 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:44.794 16:14:04 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:44.794 16:14:04 -- common/autotest_common.sh@1540 -- # oacs=' 0x12a' 
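Decoding the id-ctrl plumbing above: `grep oacs | cut -d: -f2` pulls the Optional Admin Command Support field (0x12a here) out of `nvme id-ctrl`, and the oacs_ns_manage=8 assignment that follows masks it with 0x8, the namespace-management bit. A hedged sketch of the same check, assuming nvme-cli's usual "oacs : 0x12a" output format and root access to the controller node:

    #!/usr/bin/env bash
    # Sketch: test the Namespace Management bit (bit 3) of OACS.
    ctrl=/dev/nvme1
    oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
    # e.g. 0x12a & 0x8 == 0x8 -> namespace management/attachment supported
    oacs_ns_manage=$(( oacs & 0x8 ))
    if (( oacs_ns_manage != 0 )); then
        echo "$ctrl supports namespace management"
    fi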
00:04:44.794 16:14:04 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:04:44.794 16:14:04 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:04:44.794 16:14:04 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:04:44.794 16:14:04 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:04:44.794 16:14:04 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:04:44.794 16:14:04 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:04:44.794 16:14:04 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:04:44.794 16:14:04 -- common/autotest_common.sh@1552 -- # continue 00:04:44.794 16:14:04 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:04:44.794 16:14:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:44.794 16:14:04 -- common/autotest_common.sh@10 -- # set +x 00:04:44.794 16:14:04 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:04:44.794 16:14:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:44.794 16:14:04 -- common/autotest_common.sh@10 -- # set +x 00:04:44.794 16:14:04 -- spdk/autotest.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:45.361 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.621 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.621 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.621 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.621 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:04:45.621 16:14:05 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:04:45.621 16:14:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:45.621 16:14:05 -- common/autotest_common.sh@10 -- # set +x 00:04:45.621 16:14:05 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:04:45.621 16:14:05 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:04:45.621 16:14:05 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:04:45.621 16:14:05 -- common/autotest_common.sh@1572 -- # bdfs=() 00:04:45.621 16:14:05 -- common/autotest_common.sh@1572 -- # local bdfs 00:04:45.621 16:14:05 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:04:45.621 16:14:05 -- common/autotest_common.sh@1508 -- # bdfs=() 00:04:45.621 16:14:05 -- common/autotest_common.sh@1508 -- # local bdfs 00:04:45.621 16:14:05 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.621 16:14:05 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:45.621 16:14:05 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:04:45.621 16:14:05 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:04:45.621 16:14:05 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:04:45.621 16:14:05 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:45.621 16:14:05 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:04:45.621 16:14:05 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:45.621 16:14:05 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:45.881 16:14:05 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:45.881 16:14:05 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:04:45.881 16:14:05 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:45.881 16:14:05 -- common/autotest_common.sh@1576 -- # [[ 
0x0010 == \0\x\0\a\5\4 ]] 00:04:45.881 16:14:05 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:45.881 16:14:05 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:08.0/device 00:04:45.881 16:14:05 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:45.881 16:14:05 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:45.881 16:14:05 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:04:45.881 16:14:05 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:00:09.0/device 00:04:45.881 16:14:05 -- common/autotest_common.sh@1575 -- # device=0x0010 00:04:45.881 16:14:05 -- common/autotest_common.sh@1576 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:45.881 16:14:05 -- common/autotest_common.sh@1581 -- # printf '%s\n' 00:04:45.881 16:14:05 -- common/autotest_common.sh@1587 -- # [[ -z '' ]] 00:04:45.881 16:14:05 -- common/autotest_common.sh@1588 -- # return 0 00:04:45.881 16:14:05 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:04:45.881 16:14:05 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:04:45.881 16:14:05 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:45.881 16:14:05 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:04:45.881 16:14:05 -- spdk/autotest.sh@160 -- # timing_enter lib 00:04:45.881 16:14:05 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:45.881 16:14:05 -- common/autotest_common.sh@10 -- # set +x 00:04:45.881 16:14:05 -- spdk/autotest.sh@162 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:45.881 16:14:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.881 16:14:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.881 16:14:05 -- common/autotest_common.sh@10 -- # set +x 00:04:45.881 ************************************ 00:04:45.881 START TEST env 00:04:45.881 ************************************ 00:04:45.881 16:14:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:45.881 * Looking for test storage... 00:04:45.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:45.881 16:14:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:45.881 16:14:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:45.881 16:14:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:45.881 16:14:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:45.881 16:14:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:45.881 16:14:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:45.881 16:14:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:45.881 16:14:05 -- scripts/common.sh@335 -- # IFS=.-: 00:04:45.881 16:14:05 -- scripts/common.sh@335 -- # read -ra ver1 00:04:45.881 16:14:05 -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.881 16:14:05 -- scripts/common.sh@336 -- # read -ra ver2 00:04:45.881 16:14:05 -- scripts/common.sh@337 -- # local 'op=<' 00:04:45.881 16:14:05 -- scripts/common.sh@339 -- # ver1_l=2 00:04:45.881 16:14:05 -- scripts/common.sh@340 -- # ver2_l=1 00:04:45.881 16:14:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:45.881 16:14:05 -- scripts/common.sh@343 -- # case "$op" in 00:04:45.881 16:14:05 -- scripts/common.sh@344 -- # : 1 00:04:45.881 16:14:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:45.881 16:14:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:45.881 16:14:05 -- scripts/common.sh@364 -- # decimal 1 00:04:45.881 16:14:05 -- scripts/common.sh@352 -- # local d=1 00:04:45.881 16:14:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.881 16:14:05 -- scripts/common.sh@354 -- # echo 1 00:04:45.881 16:14:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:45.881 16:14:05 -- scripts/common.sh@365 -- # decimal 2 00:04:45.881 16:14:05 -- scripts/common.sh@352 -- # local d=2 00:04:45.881 16:14:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.881 16:14:05 -- scripts/common.sh@354 -- # echo 2 00:04:45.881 16:14:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:45.881 16:14:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:45.881 16:14:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:45.881 16:14:05 -- scripts/common.sh@367 -- # return 0 00:04:45.881 16:14:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.881 16:14:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:45.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.881 --rc genhtml_branch_coverage=1 00:04:45.881 --rc genhtml_function_coverage=1 00:04:45.881 --rc genhtml_legend=1 00:04:45.881 --rc geninfo_all_blocks=1 00:04:45.881 --rc geninfo_unexecuted_blocks=1 00:04:45.881 00:04:45.881 ' 00:04:45.881 16:14:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:45.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.881 --rc genhtml_branch_coverage=1 00:04:45.881 --rc genhtml_function_coverage=1 00:04:45.881 --rc genhtml_legend=1 00:04:45.881 --rc geninfo_all_blocks=1 00:04:45.881 --rc geninfo_unexecuted_blocks=1 00:04:45.881 00:04:45.881 ' 00:04:45.881 16:14:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:45.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.881 --rc genhtml_branch_coverage=1 00:04:45.881 --rc genhtml_function_coverage=1 00:04:45.881 --rc genhtml_legend=1 00:04:45.881 --rc geninfo_all_blocks=1 00:04:45.881 --rc geninfo_unexecuted_blocks=1 00:04:45.881 00:04:45.881 ' 00:04:45.881 16:14:05 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:45.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.881 --rc genhtml_branch_coverage=1 00:04:45.881 --rc genhtml_function_coverage=1 00:04:45.881 --rc genhtml_legend=1 00:04:45.881 --rc geninfo_all_blocks=1 00:04:45.881 --rc geninfo_unexecuted_blocks=1 00:04:45.881 00:04:45.881 ' 00:04:45.881 16:14:05 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:45.881 16:14:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:45.881 16:14:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:45.881 16:14:05 -- common/autotest_common.sh@10 -- # set +x 00:04:45.881 ************************************ 00:04:45.881 START TEST env_memory 00:04:45.881 ************************************ 00:04:45.881 16:14:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:45.881 00:04:45.881 00:04:45.881 CUnit - A unit testing framework for C - Version 2.1-3 00:04:45.881 http://cunit.sourceforge.net/ 00:04:45.881 00:04:45.881 00:04:45.881 Suite: memory 00:04:45.881 Test: alloc and free memory map ...[2024-11-09 16:14:05.612511] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:45.881 passed 00:04:46.178 Test: mem 
map translation ...[2024-11-09 16:14:05.651329] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:46.178 [2024-11-09 16:14:05.651438] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:46.178 [2024-11-09 16:14:05.651542] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:46.178 [2024-11-09 16:14:05.651608] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:46.178 passed 00:04:46.178 Test: mem map registration ...[2024-11-09 16:14:05.719856] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:46.178 [2024-11-09 16:14:05.719960] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:46.178 passed 00:04:46.178 Test: mem map adjacent registrations ...passed 00:04:46.178 00:04:46.178 Run Summary: Type Total Ran Passed Failed Inactive 00:04:46.178 suites 1 1 n/a 0 0 00:04:46.178 tests 4 4 4 0 0 00:04:46.178 asserts 152 152 152 0 n/a 00:04:46.178 00:04:46.178 Elapsed time = 0.233 seconds 00:04:46.178 00:04:46.178 real 0m0.268s 00:04:46.178 user 0m0.242s 00:04:46.178 sys 0m0.018s 00:04:46.178 16:14:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:46.178 16:14:05 -- common/autotest_common.sh@10 -- # set +x 00:04:46.178 ************************************ 00:04:46.178 END TEST env_memory 00:04:46.178 ************************************ 00:04:46.178 16:14:05 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:46.178 16:14:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:46.178 16:14:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:46.178 16:14:05 -- common/autotest_common.sh@10 -- # set +x 00:04:46.178 ************************************ 00:04:46.178 START TEST env_vtophys 00:04:46.178 ************************************ 00:04:46.178 16:14:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:46.178 EAL: lib.eal log level changed from notice to debug 00:04:46.178 EAL: Detected lcore 0 as core 0 on socket 0 00:04:46.178 EAL: Detected lcore 1 as core 0 on socket 0 00:04:46.178 EAL: Detected lcore 2 as core 0 on socket 0 00:04:46.178 EAL: Detected lcore 3 as core 0 on socket 0 00:04:46.178 EAL: Detected lcore 4 as core 0 on socket 0 00:04:46.178 EAL: Detected lcore 5 as core 0 on socket 0 00:04:46.178 EAL: Detected lcore 6 as core 0 on socket 0 00:04:46.178 EAL: Detected lcore 7 as core 0 on socket 0 00:04:46.178 EAL: Detected lcore 8 as core 0 on socket 0 00:04:46.178 EAL: Detected lcore 9 as core 0 on socket 0 00:04:46.178 EAL: Maximum logical cores by configuration: 128 00:04:46.178 EAL: Detected CPU lcores: 10 00:04:46.178 EAL: Detected NUMA nodes: 1 00:04:46.178 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:46.178 EAL: Detected shared linkage of DPDK 00:04:46.178 EAL: No shared files mode enabled, IPC will be disabled 00:04:46.458 EAL: Selected IOVA mode 'PA' 00:04:46.458 EAL: Probing VFIO support... 
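The invalid-parameter errors in the env_memory output above are expected: memory_ut deliberately feeds unaligned values to spdk_mem_register and spdk_mem_map_set_translation, which accept only vaddr/len pairs aligned to the map's 2 MiB granularity (assumed here from the logged rejections, not quoted from the test source). Checking the logged values with plain shell arithmetic:

# vaddr=0x200000 (exactly 2 MiB) is aligned; len=1234 is not a multiple of
# 2 MiB, so the pair is rejected. The same applies to vaddr=0x4d2 (1234).
echo $(( 0x200000 % (2 * 1024 * 1024) ))   # 0    -> aligned
echo $((     1234 % (2 * 1024 * 1024) ))   # 1234 -> rejected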
00:04:46.458 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:46.458 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:46.458 EAL: Ask a virtual area of 0x2e000 bytes 00:04:46.458 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:46.458 EAL: Setting up physically contiguous memory... 00:04:46.458 EAL: Setting maximum number of open files to 524288 00:04:46.458 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:46.458 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:46.458 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.458 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:46.458 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.458 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.458 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:46.458 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:46.458 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.458 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:46.458 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.458 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.458 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:46.458 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:46.458 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.458 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:46.458 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.458 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.458 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:46.458 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:46.458 EAL: Ask a virtual area of 0x61000 bytes 00:04:46.458 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:46.458 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:46.458 EAL: Ask a virtual area of 0x400000000 bytes 00:04:46.458 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:46.458 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:46.458 EAL: Hugepages will be freed exactly as allocated. 00:04:46.458 EAL: No shared files mode enabled, IPC is disabled 00:04:46.458 EAL: No shared files mode enabled, IPC is disabled 00:04:46.458 EAL: TSC frequency is ~2600000 KHz 00:04:46.458 EAL: Main lcore 0 is ready (tid=7fd510fd4a40;cpuset=[0]) 00:04:46.458 EAL: Trying to obtain current memory policy. 00:04:46.458 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.458 EAL: Restoring previous memory policy: 0 00:04:46.458 EAL: request: mp_malloc_sync 00:04:46.458 EAL: No shared files mode enabled, IPC is disabled 00:04:46.458 EAL: Heap on socket 0 was expanded by 2MB 00:04:46.458 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:46.458 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:46.458 EAL: Mem event callback 'spdk:(nil)' registered 00:04:46.458 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:46.458 00:04:46.458 00:04:46.458 CUnit - A unit testing framework for C - Version 2.1-3 00:04:46.458 http://cunit.sourceforge.net/ 00:04:46.458 00:04:46.458 00:04:46.458 Suite: components_suite 00:04:46.717 Test: vtophys_malloc_test ...passed 00:04:46.717 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:46.717 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.717 EAL: Restoring previous memory policy: 4 00:04:46.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.717 EAL: request: mp_malloc_sync 00:04:46.717 EAL: No shared files mode enabled, IPC is disabled 00:04:46.717 EAL: Heap on socket 0 was expanded by 4MB 00:04:46.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.717 EAL: request: mp_malloc_sync 00:04:46.717 EAL: No shared files mode enabled, IPC is disabled 00:04:46.717 EAL: Heap on socket 0 was shrunk by 4MB 00:04:46.717 EAL: Trying to obtain current memory policy. 00:04:46.717 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.717 EAL: Restoring previous memory policy: 4 00:04:46.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.717 EAL: request: mp_malloc_sync 00:04:46.717 EAL: No shared files mode enabled, IPC is disabled 00:04:46.717 EAL: Heap on socket 0 was expanded by 6MB 00:04:46.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.717 EAL: request: mp_malloc_sync 00:04:46.717 EAL: No shared files mode enabled, IPC is disabled 00:04:46.717 EAL: Heap on socket 0 was shrunk by 6MB 00:04:46.717 EAL: Trying to obtain current memory policy. 00:04:46.717 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.717 EAL: Restoring previous memory policy: 4 00:04:46.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.717 EAL: request: mp_malloc_sync 00:04:46.717 EAL: No shared files mode enabled, IPC is disabled 00:04:46.717 EAL: Heap on socket 0 was expanded by 10MB 00:04:46.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.717 EAL: request: mp_malloc_sync 00:04:46.717 EAL: No shared files mode enabled, IPC is disabled 00:04:46.717 EAL: Heap on socket 0 was shrunk by 10MB 00:04:46.717 EAL: Trying to obtain current memory policy. 00:04:46.717 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.717 EAL: Restoring previous memory policy: 4 00:04:46.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.717 EAL: request: mp_malloc_sync 00:04:46.717 EAL: No shared files mode enabled, IPC is disabled 00:04:46.717 EAL: Heap on socket 0 was expanded by 18MB 00:04:46.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.717 EAL: request: mp_malloc_sync 00:04:46.717 EAL: No shared files mode enabled, IPC is disabled 00:04:46.717 EAL: Heap on socket 0 was shrunk by 18MB 00:04:46.717 EAL: Trying to obtain current memory policy. 00:04:46.717 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.717 EAL: Restoring previous memory policy: 4 00:04:46.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.717 EAL: request: mp_malloc_sync 00:04:46.717 EAL: No shared files mode enabled, IPC is disabled 00:04:46.717 EAL: Heap on socket 0 was expanded by 34MB 00:04:46.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.717 EAL: request: mp_malloc_sync 00:04:46.717 EAL: No shared files mode enabled, IPC is disabled 00:04:46.717 EAL: Heap on socket 0 was shrunk by 34MB 00:04:46.717 EAL: Trying to obtain current memory policy. 
00:04:46.717 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.976 EAL: Restoring previous memory policy: 4 00:04:46.976 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.976 EAL: request: mp_malloc_sync 00:04:46.976 EAL: No shared files mode enabled, IPC is disabled 00:04:46.976 EAL: Heap on socket 0 was expanded by 66MB 00:04:46.976 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.976 EAL: request: mp_malloc_sync 00:04:46.976 EAL: No shared files mode enabled, IPC is disabled 00:04:46.976 EAL: Heap on socket 0 was shrunk by 66MB 00:04:46.976 EAL: Trying to obtain current memory policy. 00:04:46.976 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:46.976 EAL: Restoring previous memory policy: 4 00:04:46.976 EAL: Calling mem event callback 'spdk:(nil)' 00:04:46.976 EAL: request: mp_malloc_sync 00:04:46.976 EAL: No shared files mode enabled, IPC is disabled 00:04:46.976 EAL: Heap on socket 0 was expanded by 130MB 00:04:47.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.236 EAL: request: mp_malloc_sync 00:04:47.236 EAL: No shared files mode enabled, IPC is disabled 00:04:47.236 EAL: Heap on socket 0 was shrunk by 130MB 00:04:47.236 EAL: Trying to obtain current memory policy. 00:04:47.236 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.236 EAL: Restoring previous memory policy: 4 00:04:47.236 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.236 EAL: request: mp_malloc_sync 00:04:47.236 EAL: No shared files mode enabled, IPC is disabled 00:04:47.236 EAL: Heap on socket 0 was expanded by 258MB 00:04:47.498 EAL: Calling mem event callback 'spdk:(nil)' 00:04:47.757 EAL: request: mp_malloc_sync 00:04:47.757 EAL: No shared files mode enabled, IPC is disabled 00:04:47.758 EAL: Heap on socket 0 was shrunk by 258MB 00:04:47.758 EAL: Trying to obtain current memory policy. 00:04:47.758 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.015 EAL: Restoring previous memory policy: 4 00:04:48.015 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.015 EAL: request: mp_malloc_sync 00:04:48.015 EAL: No shared files mode enabled, IPC is disabled 00:04:48.015 EAL: Heap on socket 0 was expanded by 514MB 00:04:48.584 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.584 EAL: request: mp_malloc_sync 00:04:48.584 EAL: No shared files mode enabled, IPC is disabled 00:04:48.584 EAL: Heap on socket 0 was shrunk by 514MB 00:04:49.151 EAL: Trying to obtain current memory policy. 
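The expand/shrink sizes in vtophys_spdk_malloc_test are not arbitrary: after the initial 2 MB round, every logged round fits 2^k + 2 MB (4, 6, 10, 18, 34, 66, 130, 258, 514, and the 1026 MB round that follows). A one-liner reproducing the series, offered as an observation about the log rather than a quote of the test source:

for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
# 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB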
00:04:49.151 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.151 EAL: Restoring previous memory policy: 4 00:04:49.151 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.151 EAL: request: mp_malloc_sync 00:04:49.151 EAL: No shared files mode enabled, IPC is disabled 00:04:49.151 EAL: Heap on socket 0 was expanded by 1026MB 00:04:50.089 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.089 EAL: request: mp_malloc_sync 00:04:50.089 EAL: No shared files mode enabled, IPC is disabled 00:04:50.089 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:51.033 passed 00:04:51.033 00:04:51.033 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.033 suites 1 1 n/a 0 0 00:04:51.033 tests 2 2 2 0 0 00:04:51.033 asserts 5418 5418 5418 0 n/a 00:04:51.033 00:04:51.033 Elapsed time = 4.512 seconds 00:04:51.033 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.033 EAL: request: mp_malloc_sync 00:04:51.033 EAL: No shared files mode enabled, IPC is disabled 00:04:51.033 EAL: Heap on socket 0 was shrunk by 2MB 00:04:51.033 EAL: No shared files mode enabled, IPC is disabled 00:04:51.033 EAL: No shared files mode enabled, IPC is disabled 00:04:51.033 EAL: No shared files mode enabled, IPC is disabled 00:04:51.033 00:04:51.033 real 0m4.762s 00:04:51.033 user 0m4.013s 00:04:51.033 sys 0m0.604s 00:04:51.033 ************************************ 00:04:51.033 END TEST env_vtophys 00:04:51.033 ************************************ 00:04:51.033 16:14:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.033 16:14:10 -- common/autotest_common.sh@10 -- # set +x 00:04:51.033 16:14:10 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:51.033 16:14:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.033 16:14:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.033 16:14:10 -- common/autotest_common.sh@10 -- # set +x 00:04:51.033 ************************************ 00:04:51.033 START TEST env_pci 00:04:51.033 ************************************ 00:04:51.033 16:14:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:51.033 00:04:51.033 00:04:51.033 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.033 http://cunit.sourceforge.net/ 00:04:51.033 00:04:51.033 00:04:51.033 Suite: pci 00:04:51.033 Test: pci_hook ...[2024-11-09 16:14:10.717198] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56033 has claimed it 00:04:51.033 EAL: Cannot find device (10000:00:01.0) 00:04:51.033 passed 00:04:51.033 00:04:51.033 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.033 suites 1 1 n/a 0 0 00:04:51.033 tests 1 1 1 0 0 00:04:51.033 asserts 25 25 25 0 n/a 00:04:51.033 00:04:51.033 Elapsed time = 0.006 seconds 00:04:51.033 EAL: Failed to attach device on primary process 00:04:51.033 00:04:51.033 real 0m0.066s 00:04:51.033 user 0m0.033s 00:04:51.033 sys 0m0.032s 00:04:51.033 16:14:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.033 16:14:10 -- common/autotest_common.sh@10 -- # set +x 00:04:51.033 ************************************ 00:04:51.033 END TEST env_pci 00:04:51.033 ************************************ 00:04:51.033 16:14:10 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:51.033 16:14:10 -- env/env.sh@15 -- # uname 00:04:51.033 16:14:10 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:51.033 16:14:10 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:04:51.033 16:14:10 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:51.033 16:14:10 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:51.033 16:14:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.033 16:14:10 -- common/autotest_common.sh@10 -- # set +x 00:04:51.292 ************************************ 00:04:51.292 START TEST env_dpdk_post_init 00:04:51.292 ************************************ 00:04:51.292 16:14:10 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:51.292 EAL: Detected CPU lcores: 10 00:04:51.292 EAL: Detected NUMA nodes: 1 00:04:51.292 EAL: Detected shared linkage of DPDK 00:04:51.292 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:51.292 EAL: Selected IOVA mode 'PA' 00:04:51.292 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:51.292 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:04:51.292 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:04:51.292 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:08.0 (socket -1) 00:04:51.292 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:09.0 (socket -1) 00:04:51.292 Starting DPDK initialization... 00:04:51.292 Starting SPDK post initialization... 00:04:51.292 SPDK NVMe probe 00:04:51.292 Attaching to 0000:00:06.0 00:04:51.292 Attaching to 0000:00:07.0 00:04:51.292 Attaching to 0000:00:08.0 00:04:51.292 Attaching to 0000:00:09.0 00:04:51.292 Attached to 0000:00:06.0 00:04:51.292 Attached to 0000:00:07.0 00:04:51.292 Attached to 0000:00:09.0 00:04:51.292 Attached to 0000:00:08.0 00:04:51.292 Cleaning up... 00:04:51.292 00:04:51.292 real 0m0.224s 00:04:51.292 user 0m0.067s 00:04:51.292 sys 0m0.059s 00:04:51.292 ************************************ 00:04:51.292 END TEST env_dpdk_post_init 00:04:51.292 ************************************ 00:04:51.292 16:14:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.292 16:14:11 -- common/autotest_common.sh@10 -- # set +x 00:04:51.550 16:14:11 -- env/env.sh@26 -- # uname 00:04:51.550 16:14:11 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:51.550 16:14:11 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.551 16:14:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.551 16:14:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.551 16:14:11 -- common/autotest_common.sh@10 -- # set +x 00:04:51.551 ************************************ 00:04:51.551 START TEST env_mem_callbacks 00:04:51.551 ************************************ 00:04:51.551 16:14:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:51.551 EAL: Detected CPU lcores: 10 00:04:51.551 EAL: Detected NUMA nodes: 1 00:04:51.551 EAL: Detected shared linkage of DPDK 00:04:51.551 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:51.551 EAL: Selected IOVA mode 'PA' 00:04:51.551 00:04:51.551 00:04:51.551 CUnit - A unit testing framework for C - Version 2.1-3 00:04:51.551 http://cunit.sourceforge.net/ 00:04:51.551 00:04:51.551 00:04:51.551 Suite: memory 00:04:51.551 Test: test ... 
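In the register/unregister lines that follow, each "register" is the 'spdk:(nil)' mem event callback firing as the DPDK heap grows. Note that malloc 3145728 (3 MiB) is answered by register ... 4194304: the sizes are consistent with the heap growing in whole 2 MiB hugepage units, an inference from the logged numbers rather than from the test source:

# ceil(3 MiB / 2 MiB) * 2 MiB = 4 MiB, matching "register 0x200000400000 4194304".
echo $(( ( (3145728 + 2097151) / 2097152 ) * 2097152 ))   # 4194304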
00:04:51.551 register 0x200000200000 2097152 00:04:51.551 malloc 3145728 00:04:51.551 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:51.551 register 0x200000400000 4194304 00:04:51.551 buf 0x2000004fffc0 len 3145728 PASSED 00:04:51.551 malloc 64 00:04:51.551 buf 0x2000004ffec0 len 64 PASSED 00:04:51.551 malloc 4194304 00:04:51.551 register 0x200000800000 6291456 00:04:51.551 buf 0x2000009fffc0 len 4194304 PASSED 00:04:51.551 free 0x2000004fffc0 3145728 00:04:51.551 free 0x2000004ffec0 64 00:04:51.551 unregister 0x200000400000 4194304 PASSED 00:04:51.551 free 0x2000009fffc0 4194304 00:04:51.551 unregister 0x200000800000 6291456 PASSED 00:04:51.551 malloc 8388608 00:04:51.551 register 0x200000400000 10485760 00:04:51.551 buf 0x2000005fffc0 len 8388608 PASSED 00:04:51.551 free 0x2000005fffc0 8388608 00:04:51.551 unregister 0x200000400000 10485760 PASSED 00:04:51.551 passed 00:04:51.551 00:04:51.551 Run Summary: Type Total Ran Passed Failed Inactive 00:04:51.551 suites 1 1 n/a 0 0 00:04:51.551 tests 1 1 1 0 0 00:04:51.551 asserts 15 15 15 0 n/a 00:04:51.551 00:04:51.551 Elapsed time = 0.040 seconds 00:04:51.551 00:04:51.551 real 0m0.199s 00:04:51.551 user 0m0.060s 00:04:51.551 sys 0m0.038s 00:04:51.551 16:14:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.551 16:14:11 -- common/autotest_common.sh@10 -- # set +x 00:04:51.551 ************************************ 00:04:51.551 END TEST env_mem_callbacks 00:04:51.551 ************************************ 00:04:51.551 00:04:51.551 real 0m5.892s 00:04:51.551 user 0m4.557s 00:04:51.551 sys 0m0.947s 00:04:51.551 16:14:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:51.551 16:14:11 -- common/autotest_common.sh@10 -- # set +x 00:04:51.551 ************************************ 00:04:51.551 END TEST env 00:04:51.551 ************************************ 00:04:51.809 16:14:11 -- spdk/autotest.sh@163 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:51.809 16:14:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:51.809 16:14:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:51.809 16:14:11 -- common/autotest_common.sh@10 -- # set +x 00:04:51.809 ************************************ 00:04:51.809 START TEST rpc 00:04:51.809 ************************************ 00:04:51.809 16:14:11 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:51.809 * Looking for test storage... 
00:04:51.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:51.809 16:14:11 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:51.809 16:14:11 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:51.809 16:14:11 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:51.809 16:14:11 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:51.809 16:14:11 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:51.809 16:14:11 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:51.809 16:14:11 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:51.809 16:14:11 -- scripts/common.sh@335 -- # IFS=.-: 00:04:51.809 16:14:11 -- scripts/common.sh@335 -- # read -ra ver1 00:04:51.809 16:14:11 -- scripts/common.sh@336 -- # IFS=.-: 00:04:51.809 16:14:11 -- scripts/common.sh@336 -- # read -ra ver2 00:04:51.809 16:14:11 -- scripts/common.sh@337 -- # local 'op=<' 00:04:51.809 16:14:11 -- scripts/common.sh@339 -- # ver1_l=2 00:04:51.809 16:14:11 -- scripts/common.sh@340 -- # ver2_l=1 00:04:51.809 16:14:11 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:51.809 16:14:11 -- scripts/common.sh@343 -- # case "$op" in 00:04:51.809 16:14:11 -- scripts/common.sh@344 -- # : 1 00:04:51.809 16:14:11 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:51.809 16:14:11 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:51.809 16:14:11 -- scripts/common.sh@364 -- # decimal 1 00:04:51.809 16:14:11 -- scripts/common.sh@352 -- # local d=1 00:04:51.809 16:14:11 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:51.809 16:14:11 -- scripts/common.sh@354 -- # echo 1 00:04:51.809 16:14:11 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:51.809 16:14:11 -- scripts/common.sh@365 -- # decimal 2 00:04:51.809 16:14:11 -- scripts/common.sh@352 -- # local d=2 00:04:51.809 16:14:11 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:51.809 16:14:11 -- scripts/common.sh@354 -- # echo 2 00:04:51.809 16:14:11 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:51.809 16:14:11 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:51.809 16:14:11 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:51.809 16:14:11 -- scripts/common.sh@367 -- # return 0 00:04:51.809 16:14:11 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:51.809 16:14:11 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:51.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.809 --rc genhtml_branch_coverage=1 00:04:51.809 --rc genhtml_function_coverage=1 00:04:51.809 --rc genhtml_legend=1 00:04:51.809 --rc geninfo_all_blocks=1 00:04:51.809 --rc geninfo_unexecuted_blocks=1 00:04:51.809 00:04:51.809 ' 00:04:51.809 16:14:11 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:51.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.809 --rc genhtml_branch_coverage=1 00:04:51.809 --rc genhtml_function_coverage=1 00:04:51.809 --rc genhtml_legend=1 00:04:51.809 --rc geninfo_all_blocks=1 00:04:51.809 --rc geninfo_unexecuted_blocks=1 00:04:51.809 00:04:51.809 ' 00:04:51.809 16:14:11 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:51.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.809 --rc genhtml_branch_coverage=1 00:04:51.809 --rc genhtml_function_coverage=1 00:04:51.809 --rc genhtml_legend=1 00:04:51.809 --rc geninfo_all_blocks=1 00:04:51.809 --rc geninfo_unexecuted_blocks=1 00:04:51.809 00:04:51.809 ' 00:04:51.809 16:14:11 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:51.809 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:51.809 --rc genhtml_branch_coverage=1 00:04:51.809 --rc genhtml_function_coverage=1 00:04:51.809 --rc genhtml_legend=1 00:04:51.809 --rc geninfo_all_blocks=1 00:04:51.809 --rc geninfo_unexecuted_blocks=1 00:04:51.809 00:04:51.809 ' 00:04:51.809 16:14:11 -- rpc/rpc.sh@65 -- # spdk_pid=56159 00:04:51.809 16:14:11 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.810 16:14:11 -- rpc/rpc.sh@67 -- # waitforlisten 56159 00:04:51.810 16:14:11 -- common/autotest_common.sh@829 -- # '[' -z 56159 ']' 00:04:51.810 16:14:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:51.810 16:14:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:51.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:51.810 16:14:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:51.810 16:14:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:51.810 16:14:11 -- common/autotest_common.sh@10 -- # set +x 00:04:51.810 16:14:11 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:51.810 [2024-11-09 16:14:11.540728] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:51.810 [2024-11-09 16:14:11.540837] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56159 ] 00:04:52.068 [2024-11-09 16:14:11.685779] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.068 [2024-11-09 16:14:11.837006] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:52.068 [2024-11-09 16:14:11.837157] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:52.068 [2024-11-09 16:14:11.837169] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56159' to capture a snapshot of events at runtime. 00:04:52.068 [2024-11-09 16:14:11.837176] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56159 for offline analysis/debug. 
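spdk_tgt is now up (pid 56159) listening on /var/tmp/spdk.sock, and the rpc_integrity test that follows drives it through rpc_cmd. Outside the harness the same flow can be reproduced by hand with scripts/rpc.py, using only RPC methods this test itself exercises (bdev names assume a fresh target):

scripts/rpc.py bdev_malloc_create 8 512                  # creates Malloc0
scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
scripts/rpc.py bdev_get_bdevs | jq length                # 2: Malloc0 plus Passthru0
scripts/rpc.py bdev_passthru_delete Passthru0
scripts/rpc.py bdev_malloc_delete Malloc0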
00:04:52.068 [2024-11-09 16:14:11.837198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.635 16:14:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:52.636 16:14:12 -- common/autotest_common.sh@862 -- # return 0 00:04:52.636 16:14:12 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:52.636 16:14:12 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:52.636 16:14:12 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:52.636 16:14:12 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:52.636 16:14:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.636 16:14:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.636 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:52.636 ************************************ 00:04:52.636 START TEST rpc_integrity 00:04:52.636 ************************************ 00:04:52.636 16:14:12 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:52.636 16:14:12 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:52.636 16:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.636 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:52.636 16:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.636 16:14:12 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:52.636 16:14:12 -- rpc/rpc.sh@13 -- # jq length 00:04:52.636 16:14:12 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:52.636 16:14:12 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:52.636 16:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.636 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:52.636 16:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.636 16:14:12 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:52.636 16:14:12 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:52.636 16:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.636 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:52.636 16:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.636 16:14:12 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:52.636 { 00:04:52.636 "name": "Malloc0", 00:04:52.636 "aliases": [ 00:04:52.636 "d9d71212-c12f-4ba8-a408-9b76a30aa456" 00:04:52.636 ], 00:04:52.636 "product_name": "Malloc disk", 00:04:52.636 "block_size": 512, 00:04:52.636 "num_blocks": 16384, 00:04:52.636 "uuid": "d9d71212-c12f-4ba8-a408-9b76a30aa456", 00:04:52.636 "assigned_rate_limits": { 00:04:52.636 "rw_ios_per_sec": 0, 00:04:52.636 "rw_mbytes_per_sec": 0, 00:04:52.636 "r_mbytes_per_sec": 0, 00:04:52.636 "w_mbytes_per_sec": 0 00:04:52.636 }, 00:04:52.636 "claimed": false, 00:04:52.636 "zoned": false, 00:04:52.636 "supported_io_types": { 00:04:52.636 "read": true, 00:04:52.636 "write": true, 00:04:52.636 "unmap": true, 00:04:52.636 "write_zeroes": true, 00:04:52.636 "flush": true, 00:04:52.636 "reset": true, 00:04:52.636 "compare": false, 00:04:52.636 "compare_and_write": false, 00:04:52.636 "abort": true, 00:04:52.636 "nvme_admin": false, 00:04:52.636 "nvme_io": false 00:04:52.636 }, 00:04:52.636 "memory_domains": [ 00:04:52.636 { 00:04:52.636 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.636 
"dma_device_type": 2 00:04:52.636 } 00:04:52.636 ], 00:04:52.636 "driver_specific": {} 00:04:52.636 } 00:04:52.636 ]' 00:04:52.636 16:14:12 -- rpc/rpc.sh@17 -- # jq length 00:04:52.895 16:14:12 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:52.895 16:14:12 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:52.895 16:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.895 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:52.895 [2024-11-09 16:14:12.424497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:52.895 [2024-11-09 16:14:12.424543] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:52.895 [2024-11-09 16:14:12.424560] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:04:52.895 [2024-11-09 16:14:12.424568] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:52.895 [2024-11-09 16:14:12.426258] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:52.895 [2024-11-09 16:14:12.426287] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:52.895 Passthru0 00:04:52.895 16:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.895 16:14:12 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:52.895 16:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.895 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:52.895 16:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.895 16:14:12 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:52.895 { 00:04:52.895 "name": "Malloc0", 00:04:52.895 "aliases": [ 00:04:52.895 "d9d71212-c12f-4ba8-a408-9b76a30aa456" 00:04:52.895 ], 00:04:52.895 "product_name": "Malloc disk", 00:04:52.895 "block_size": 512, 00:04:52.895 "num_blocks": 16384, 00:04:52.895 "uuid": "d9d71212-c12f-4ba8-a408-9b76a30aa456", 00:04:52.895 "assigned_rate_limits": { 00:04:52.895 "rw_ios_per_sec": 0, 00:04:52.895 "rw_mbytes_per_sec": 0, 00:04:52.895 "r_mbytes_per_sec": 0, 00:04:52.895 "w_mbytes_per_sec": 0 00:04:52.895 }, 00:04:52.895 "claimed": true, 00:04:52.895 "claim_type": "exclusive_write", 00:04:52.895 "zoned": false, 00:04:52.895 "supported_io_types": { 00:04:52.895 "read": true, 00:04:52.895 "write": true, 00:04:52.895 "unmap": true, 00:04:52.895 "write_zeroes": true, 00:04:52.895 "flush": true, 00:04:52.895 "reset": true, 00:04:52.895 "compare": false, 00:04:52.895 "compare_and_write": false, 00:04:52.895 "abort": true, 00:04:52.895 "nvme_admin": false, 00:04:52.895 "nvme_io": false 00:04:52.895 }, 00:04:52.895 "memory_domains": [ 00:04:52.895 { 00:04:52.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.895 "dma_device_type": 2 00:04:52.895 } 00:04:52.895 ], 00:04:52.895 "driver_specific": {} 00:04:52.895 }, 00:04:52.895 { 00:04:52.895 "name": "Passthru0", 00:04:52.895 "aliases": [ 00:04:52.895 "9712be07-fe52-5ee1-af57-63b53b089618" 00:04:52.895 ], 00:04:52.895 "product_name": "passthru", 00:04:52.895 "block_size": 512, 00:04:52.895 "num_blocks": 16384, 00:04:52.895 "uuid": "9712be07-fe52-5ee1-af57-63b53b089618", 00:04:52.895 "assigned_rate_limits": { 00:04:52.895 "rw_ios_per_sec": 0, 00:04:52.895 "rw_mbytes_per_sec": 0, 00:04:52.895 "r_mbytes_per_sec": 0, 00:04:52.895 "w_mbytes_per_sec": 0 00:04:52.895 }, 00:04:52.895 "claimed": false, 00:04:52.895 "zoned": false, 00:04:52.895 "supported_io_types": { 00:04:52.895 "read": true, 00:04:52.895 "write": true, 00:04:52.896 "unmap": true, 00:04:52.896 
"write_zeroes": true, 00:04:52.896 "flush": true, 00:04:52.896 "reset": true, 00:04:52.896 "compare": false, 00:04:52.896 "compare_and_write": false, 00:04:52.896 "abort": true, 00:04:52.896 "nvme_admin": false, 00:04:52.896 "nvme_io": false 00:04:52.896 }, 00:04:52.896 "memory_domains": [ 00:04:52.896 { 00:04:52.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.896 "dma_device_type": 2 00:04:52.896 } 00:04:52.896 ], 00:04:52.896 "driver_specific": { 00:04:52.896 "passthru": { 00:04:52.896 "name": "Passthru0", 00:04:52.896 "base_bdev_name": "Malloc0" 00:04:52.896 } 00:04:52.896 } 00:04:52.896 } 00:04:52.896 ]' 00:04:52.896 16:14:12 -- rpc/rpc.sh@21 -- # jq length 00:04:52.896 16:14:12 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:52.896 16:14:12 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:52.896 16:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.896 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:52.896 16:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.896 16:14:12 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:52.896 16:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.896 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:52.896 16:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.896 16:14:12 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:52.896 16:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.896 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:52.896 16:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.896 16:14:12 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:52.896 16:14:12 -- rpc/rpc.sh@26 -- # jq length 00:04:52.896 16:14:12 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:52.896 00:04:52.896 real 0m0.226s 00:04:52.896 user 0m0.121s 00:04:52.896 sys 0m0.026s 00:04:52.896 ************************************ 00:04:52.896 END TEST rpc_integrity 00:04:52.896 ************************************ 00:04:52.896 16:14:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:52.896 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:52.896 16:14:12 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:52.896 16:14:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:52.896 16:14:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:52.896 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:52.896 ************************************ 00:04:52.896 START TEST rpc_plugins 00:04:52.896 ************************************ 00:04:52.896 16:14:12 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:04:52.896 16:14:12 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:52.896 16:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.896 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:52.896 16:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.896 16:14:12 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:52.896 16:14:12 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:52.896 16:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.896 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:52.896 16:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.896 16:14:12 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:52.896 { 00:04:52.896 "name": "Malloc1", 00:04:52.896 "aliases": [ 00:04:52.896 "5729c72e-f4ac-4372-820d-a2693c835a5b" 00:04:52.896 ], 00:04:52.896 "product_name": "Malloc disk", 00:04:52.896 
"block_size": 4096, 00:04:52.896 "num_blocks": 256, 00:04:52.896 "uuid": "5729c72e-f4ac-4372-820d-a2693c835a5b", 00:04:52.896 "assigned_rate_limits": { 00:04:52.896 "rw_ios_per_sec": 0, 00:04:52.896 "rw_mbytes_per_sec": 0, 00:04:52.896 "r_mbytes_per_sec": 0, 00:04:52.896 "w_mbytes_per_sec": 0 00:04:52.896 }, 00:04:52.896 "claimed": false, 00:04:52.896 "zoned": false, 00:04:52.896 "supported_io_types": { 00:04:52.896 "read": true, 00:04:52.896 "write": true, 00:04:52.896 "unmap": true, 00:04:52.896 "write_zeroes": true, 00:04:52.896 "flush": true, 00:04:52.896 "reset": true, 00:04:52.896 "compare": false, 00:04:52.896 "compare_and_write": false, 00:04:52.896 "abort": true, 00:04:52.896 "nvme_admin": false, 00:04:52.896 "nvme_io": false 00:04:52.896 }, 00:04:52.896 "memory_domains": [ 00:04:52.896 { 00:04:52.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:52.896 "dma_device_type": 2 00:04:52.896 } 00:04:52.896 ], 00:04:52.896 "driver_specific": {} 00:04:52.896 } 00:04:52.896 ]' 00:04:52.896 16:14:12 -- rpc/rpc.sh@32 -- # jq length 00:04:52.896 16:14:12 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:52.896 16:14:12 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:52.896 16:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.896 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:52.896 16:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.896 16:14:12 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:52.896 16:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:52.896 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:52.896 16:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:52.896 16:14:12 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:52.896 16:14:12 -- rpc/rpc.sh@36 -- # jq length 00:04:53.154 16:14:12 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:53.154 00:04:53.154 real 0m0.105s 00:04:53.154 user 0m0.054s 00:04:53.154 sys 0m0.020s 00:04:53.154 ************************************ 00:04:53.154 END TEST rpc_plugins 00:04:53.154 ************************************ 00:04:53.154 16:14:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.154 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:53.154 16:14:12 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:53.154 16:14:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.154 16:14:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.154 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:53.154 ************************************ 00:04:53.154 START TEST rpc_trace_cmd_test 00:04:53.154 ************************************ 00:04:53.154 16:14:12 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:04:53.154 16:14:12 -- rpc/rpc.sh@40 -- # local info 00:04:53.154 16:14:12 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:53.154 16:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.154 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:53.154 16:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.154 16:14:12 -- rpc/rpc.sh@42 -- # info='{ 00:04:53.154 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56159", 00:04:53.154 "tpoint_group_mask": "0x8", 00:04:53.154 "iscsi_conn": { 00:04:53.154 "mask": "0x2", 00:04:53.154 "tpoint_mask": "0x0" 00:04:53.154 }, 00:04:53.154 "scsi": { 00:04:53.154 "mask": "0x4", 00:04:53.154 "tpoint_mask": "0x0" 00:04:53.154 }, 00:04:53.154 "bdev": { 00:04:53.154 "mask": "0x8", 00:04:53.154 "tpoint_mask": 
"0xffffffffffffffff" 00:04:53.154 }, 00:04:53.154 "nvmf_rdma": { 00:04:53.154 "mask": "0x10", 00:04:53.154 "tpoint_mask": "0x0" 00:04:53.154 }, 00:04:53.154 "nvmf_tcp": { 00:04:53.154 "mask": "0x20", 00:04:53.154 "tpoint_mask": "0x0" 00:04:53.154 }, 00:04:53.154 "ftl": { 00:04:53.154 "mask": "0x40", 00:04:53.154 "tpoint_mask": "0x0" 00:04:53.154 }, 00:04:53.154 "blobfs": { 00:04:53.154 "mask": "0x80", 00:04:53.154 "tpoint_mask": "0x0" 00:04:53.154 }, 00:04:53.154 "dsa": { 00:04:53.154 "mask": "0x200", 00:04:53.154 "tpoint_mask": "0x0" 00:04:53.154 }, 00:04:53.154 "thread": { 00:04:53.154 "mask": "0x400", 00:04:53.154 "tpoint_mask": "0x0" 00:04:53.154 }, 00:04:53.154 "nvme_pcie": { 00:04:53.155 "mask": "0x800", 00:04:53.155 "tpoint_mask": "0x0" 00:04:53.155 }, 00:04:53.155 "iaa": { 00:04:53.155 "mask": "0x1000", 00:04:53.155 "tpoint_mask": "0x0" 00:04:53.155 }, 00:04:53.155 "nvme_tcp": { 00:04:53.155 "mask": "0x2000", 00:04:53.155 "tpoint_mask": "0x0" 00:04:53.155 }, 00:04:53.155 "bdev_nvme": { 00:04:53.155 "mask": "0x4000", 00:04:53.155 "tpoint_mask": "0x0" 00:04:53.155 } 00:04:53.155 }' 00:04:53.155 16:14:12 -- rpc/rpc.sh@43 -- # jq length 00:04:53.155 16:14:12 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:53.155 16:14:12 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:53.155 16:14:12 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:53.155 16:14:12 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:53.155 16:14:12 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:53.155 16:14:12 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:53.155 16:14:12 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:53.155 16:14:12 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:53.155 16:14:12 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:53.155 00:04:53.155 real 0m0.152s 00:04:53.155 user 0m0.133s 00:04:53.155 sys 0m0.013s 00:04:53.155 16:14:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.155 ************************************ 00:04:53.155 END TEST rpc_trace_cmd_test 00:04:53.155 ************************************ 00:04:53.155 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:53.155 16:14:12 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:53.155 16:14:12 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:53.155 16:14:12 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:53.155 16:14:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:53.155 16:14:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:53.155 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:53.155 ************************************ 00:04:53.155 START TEST rpc_daemon_integrity 00:04:53.155 ************************************ 00:04:53.155 16:14:12 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:04:53.155 16:14:12 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:53.155 16:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.155 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:53.413 16:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.413 16:14:12 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:53.413 16:14:12 -- rpc/rpc.sh@13 -- # jq length 00:04:53.413 16:14:12 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:53.413 16:14:12 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:53.413 16:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.413 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:53.413 16:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.413 16:14:12 -- 
rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:53.413 16:14:12 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:53.413 16:14:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.413 16:14:12 -- common/autotest_common.sh@10 -- # set +x 00:04:53.413 16:14:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.413 16:14:12 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:53.413 { 00:04:53.413 "name": "Malloc2", 00:04:53.413 "aliases": [ 00:04:53.413 "4f72789f-d20e-4037-97ec-7f16aa329a7a" 00:04:53.413 ], 00:04:53.413 "product_name": "Malloc disk", 00:04:53.413 "block_size": 512, 00:04:53.413 "num_blocks": 16384, 00:04:53.413 "uuid": "4f72789f-d20e-4037-97ec-7f16aa329a7a", 00:04:53.413 "assigned_rate_limits": { 00:04:53.413 "rw_ios_per_sec": 0, 00:04:53.413 "rw_mbytes_per_sec": 0, 00:04:53.413 "r_mbytes_per_sec": 0, 00:04:53.413 "w_mbytes_per_sec": 0 00:04:53.413 }, 00:04:53.413 "claimed": false, 00:04:53.413 "zoned": false, 00:04:53.413 "supported_io_types": { 00:04:53.413 "read": true, 00:04:53.413 "write": true, 00:04:53.413 "unmap": true, 00:04:53.413 "write_zeroes": true, 00:04:53.413 "flush": true, 00:04:53.413 "reset": true, 00:04:53.413 "compare": false, 00:04:53.413 "compare_and_write": false, 00:04:53.413 "abort": true, 00:04:53.413 "nvme_admin": false, 00:04:53.413 "nvme_io": false 00:04:53.413 }, 00:04:53.413 "memory_domains": [ 00:04:53.413 { 00:04:53.413 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.413 "dma_device_type": 2 00:04:53.413 } 00:04:53.413 ], 00:04:53.413 "driver_specific": {} 00:04:53.413 } 00:04:53.413 ]' 00:04:53.413 16:14:12 -- rpc/rpc.sh@17 -- # jq length 00:04:53.413 16:14:13 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:53.413 16:14:13 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:53.413 16:14:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.413 16:14:13 -- common/autotest_common.sh@10 -- # set +x 00:04:53.413 [2024-11-09 16:14:13.020100] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:53.414 [2024-11-09 16:14:13.020144] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:53.414 [2024-11-09 16:14:13.020158] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:04:53.414 [2024-11-09 16:14:13.020166] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:53.414 [2024-11-09 16:14:13.021795] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:53.414 [2024-11-09 16:14:13.021825] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:53.414 Passthru0 00:04:53.414 16:14:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.414 16:14:13 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:53.414 16:14:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.414 16:14:13 -- common/autotest_common.sh@10 -- # set +x 00:04:53.414 16:14:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.414 16:14:13 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:53.414 { 00:04:53.414 "name": "Malloc2", 00:04:53.414 "aliases": [ 00:04:53.414 "4f72789f-d20e-4037-97ec-7f16aa329a7a" 00:04:53.414 ], 00:04:53.414 "product_name": "Malloc disk", 00:04:53.414 "block_size": 512, 00:04:53.414 "num_blocks": 16384, 00:04:53.414 "uuid": "4f72789f-d20e-4037-97ec-7f16aa329a7a", 00:04:53.414 "assigned_rate_limits": { 00:04:53.414 "rw_ios_per_sec": 0, 00:04:53.414 "rw_mbytes_per_sec": 0, 00:04:53.414 "r_mbytes_per_sec": 0, 00:04:53.414 
"w_mbytes_per_sec": 0 00:04:53.414 }, 00:04:53.414 "claimed": true, 00:04:53.414 "claim_type": "exclusive_write", 00:04:53.414 "zoned": false, 00:04:53.414 "supported_io_types": { 00:04:53.414 "read": true, 00:04:53.414 "write": true, 00:04:53.414 "unmap": true, 00:04:53.414 "write_zeroes": true, 00:04:53.414 "flush": true, 00:04:53.414 "reset": true, 00:04:53.414 "compare": false, 00:04:53.414 "compare_and_write": false, 00:04:53.414 "abort": true, 00:04:53.414 "nvme_admin": false, 00:04:53.414 "nvme_io": false 00:04:53.414 }, 00:04:53.414 "memory_domains": [ 00:04:53.414 { 00:04:53.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.414 "dma_device_type": 2 00:04:53.414 } 00:04:53.414 ], 00:04:53.414 "driver_specific": {} 00:04:53.414 }, 00:04:53.414 { 00:04:53.414 "name": "Passthru0", 00:04:53.414 "aliases": [ 00:04:53.414 "c4cde75a-d8ba-5622-b49e-34a2db2768d0" 00:04:53.414 ], 00:04:53.414 "product_name": "passthru", 00:04:53.414 "block_size": 512, 00:04:53.414 "num_blocks": 16384, 00:04:53.414 "uuid": "c4cde75a-d8ba-5622-b49e-34a2db2768d0", 00:04:53.414 "assigned_rate_limits": { 00:04:53.414 "rw_ios_per_sec": 0, 00:04:53.414 "rw_mbytes_per_sec": 0, 00:04:53.414 "r_mbytes_per_sec": 0, 00:04:53.414 "w_mbytes_per_sec": 0 00:04:53.414 }, 00:04:53.414 "claimed": false, 00:04:53.414 "zoned": false, 00:04:53.414 "supported_io_types": { 00:04:53.414 "read": true, 00:04:53.414 "write": true, 00:04:53.414 "unmap": true, 00:04:53.414 "write_zeroes": true, 00:04:53.414 "flush": true, 00:04:53.414 "reset": true, 00:04:53.414 "compare": false, 00:04:53.414 "compare_and_write": false, 00:04:53.414 "abort": true, 00:04:53.414 "nvme_admin": false, 00:04:53.414 "nvme_io": false 00:04:53.414 }, 00:04:53.414 "memory_domains": [ 00:04:53.414 { 00:04:53.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:53.414 "dma_device_type": 2 00:04:53.414 } 00:04:53.414 ], 00:04:53.414 "driver_specific": { 00:04:53.414 "passthru": { 00:04:53.414 "name": "Passthru0", 00:04:53.414 "base_bdev_name": "Malloc2" 00:04:53.414 } 00:04:53.414 } 00:04:53.414 } 00:04:53.414 ]' 00:04:53.414 16:14:13 -- rpc/rpc.sh@21 -- # jq length 00:04:53.414 16:14:13 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:53.414 16:14:13 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:53.414 16:14:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.414 16:14:13 -- common/autotest_common.sh@10 -- # set +x 00:04:53.414 16:14:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.414 16:14:13 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:53.414 16:14:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.414 16:14:13 -- common/autotest_common.sh@10 -- # set +x 00:04:53.414 16:14:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.414 16:14:13 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:53.414 16:14:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:53.414 16:14:13 -- common/autotest_common.sh@10 -- # set +x 00:04:53.414 16:14:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:53.414 16:14:13 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:53.414 16:14:13 -- rpc/rpc.sh@26 -- # jq length 00:04:53.414 16:14:13 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:53.414 00:04:53.414 real 0m0.217s 00:04:53.414 user 0m0.120s 00:04:53.414 sys 0m0.027s 00:04:53.414 16:14:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:53.414 16:14:13 -- common/autotest_common.sh@10 -- # set +x 00:04:53.414 ************************************ 00:04:53.414 END TEST 
rpc_daemon_integrity 00:04:53.414 ************************************ 00:04:53.414 16:14:13 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:53.414 16:14:13 -- rpc/rpc.sh@84 -- # killprocess 56159 00:04:53.414 16:14:13 -- common/autotest_common.sh@936 -- # '[' -z 56159 ']' 00:04:53.414 16:14:13 -- common/autotest_common.sh@940 -- # kill -0 56159 00:04:53.414 16:14:13 -- common/autotest_common.sh@941 -- # uname 00:04:53.414 16:14:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:53.414 16:14:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56159 00:04:53.672 16:14:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:53.672 16:14:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:53.672 killing process with pid 56159 00:04:53.672 16:14:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56159' 00:04:53.672 16:14:13 -- common/autotest_common.sh@955 -- # kill 56159 00:04:53.672 16:14:13 -- common/autotest_common.sh@960 -- # wait 56159 00:04:54.606 00:04:54.606 real 0m3.018s 00:04:54.606 user 0m3.362s 00:04:54.606 sys 0m0.557s 00:04:54.606 ************************************ 00:04:54.606 END TEST rpc 00:04:54.606 ************************************ 00:04:54.606 16:14:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.606 16:14:14 -- common/autotest_common.sh@10 -- # set +x 00:04:54.866 16:14:14 -- spdk/autotest.sh@164 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:54.866 16:14:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.866 16:14:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.866 16:14:14 -- common/autotest_common.sh@10 -- # set +x 00:04:54.866 ************************************ 00:04:54.866 START TEST rpc_client 00:04:54.866 ************************************ 00:04:54.866 16:14:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:54.866 * Looking for test storage... 00:04:54.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:54.866 16:14:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:54.866 16:14:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:54.866 16:14:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:54.866 16:14:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:54.866 16:14:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:54.866 16:14:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:54.866 16:14:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:54.866 16:14:14 -- scripts/common.sh@335 -- # IFS=.-: 00:04:54.866 16:14:14 -- scripts/common.sh@335 -- # read -ra ver1 00:04:54.866 16:14:14 -- scripts/common.sh@336 -- # IFS=.-: 00:04:54.866 16:14:14 -- scripts/common.sh@336 -- # read -ra ver2 00:04:54.866 16:14:14 -- scripts/common.sh@337 -- # local 'op=<' 00:04:54.866 16:14:14 -- scripts/common.sh@339 -- # ver1_l=2 00:04:54.866 16:14:14 -- scripts/common.sh@340 -- # ver2_l=1 00:04:54.866 16:14:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:54.866 16:14:14 -- scripts/common.sh@343 -- # case "$op" in 00:04:54.866 16:14:14 -- scripts/common.sh@344 -- # : 1 00:04:54.866 16:14:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:54.866 16:14:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:54.866 16:14:14 -- scripts/common.sh@364 -- # decimal 1 00:04:54.866 16:14:14 -- scripts/common.sh@352 -- # local d=1 00:04:54.866 16:14:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:54.866 16:14:14 -- scripts/common.sh@354 -- # echo 1 00:04:54.866 16:14:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:54.866 16:14:14 -- scripts/common.sh@365 -- # decimal 2 00:04:54.866 16:14:14 -- scripts/common.sh@352 -- # local d=2 00:04:54.866 16:14:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:54.866 16:14:14 -- scripts/common.sh@354 -- # echo 2 00:04:54.866 16:14:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:54.866 16:14:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:54.866 16:14:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:54.866 16:14:14 -- scripts/common.sh@367 -- # return 0 00:04:54.866 16:14:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:54.866 16:14:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:54.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.866 --rc genhtml_branch_coverage=1 00:04:54.866 --rc genhtml_function_coverage=1 00:04:54.866 --rc genhtml_legend=1 00:04:54.866 --rc geninfo_all_blocks=1 00:04:54.866 --rc geninfo_unexecuted_blocks=1 00:04:54.866 00:04:54.866 ' 00:04:54.866 16:14:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:54.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.866 --rc genhtml_branch_coverage=1 00:04:54.866 --rc genhtml_function_coverage=1 00:04:54.866 --rc genhtml_legend=1 00:04:54.866 --rc geninfo_all_blocks=1 00:04:54.866 --rc geninfo_unexecuted_blocks=1 00:04:54.866 00:04:54.866 ' 00:04:54.866 16:14:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:54.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.866 --rc genhtml_branch_coverage=1 00:04:54.866 --rc genhtml_function_coverage=1 00:04:54.866 --rc genhtml_legend=1 00:04:54.866 --rc geninfo_all_blocks=1 00:04:54.866 --rc geninfo_unexecuted_blocks=1 00:04:54.866 00:04:54.866 ' 00:04:54.866 16:14:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:54.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:54.866 --rc genhtml_branch_coverage=1 00:04:54.866 --rc genhtml_function_coverage=1 00:04:54.866 --rc genhtml_legend=1 00:04:54.866 --rc geninfo_all_blocks=1 00:04:54.866 --rc geninfo_unexecuted_blocks=1 00:04:54.866 00:04:54.866 ' 00:04:54.866 16:14:14 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:54.866 OK 00:04:54.866 16:14:14 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:54.866 00:04:54.866 real 0m0.177s 00:04:54.866 user 0m0.110s 00:04:54.866 sys 0m0.075s 00:04:54.866 16:14:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:54.866 ************************************ 00:04:54.866 END TEST rpc_client 00:04:54.866 16:14:14 -- common/autotest_common.sh@10 -- # set +x 00:04:54.866 ************************************ 00:04:54.866 16:14:14 -- spdk/autotest.sh@165 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:54.866 16:14:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.866 16:14:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.866 16:14:14 -- common/autotest_common.sh@10 -- # set +x 00:04:54.866 ************************************ 00:04:54.866 START TEST 
json_config 00:04:54.866 ************************************ 00:04:54.866 16:14:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:55.131 16:14:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:55.131 16:14:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:55.131 16:14:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:55.131 16:14:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:55.131 16:14:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:55.131 16:14:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:55.131 16:14:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:55.131 16:14:14 -- scripts/common.sh@335 -- # IFS=.-: 00:04:55.131 16:14:14 -- scripts/common.sh@335 -- # read -ra ver1 00:04:55.131 16:14:14 -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.131 16:14:14 -- scripts/common.sh@336 -- # read -ra ver2 00:04:55.131 16:14:14 -- scripts/common.sh@337 -- # local 'op=<' 00:04:55.131 16:14:14 -- scripts/common.sh@339 -- # ver1_l=2 00:04:55.131 16:14:14 -- scripts/common.sh@340 -- # ver2_l=1 00:04:55.131 16:14:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:55.131 16:14:14 -- scripts/common.sh@343 -- # case "$op" in 00:04:55.131 16:14:14 -- scripts/common.sh@344 -- # : 1 00:04:55.131 16:14:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:55.131 16:14:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.131 16:14:14 -- scripts/common.sh@364 -- # decimal 1 00:04:55.131 16:14:14 -- scripts/common.sh@352 -- # local d=1 00:04:55.131 16:14:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.131 16:14:14 -- scripts/common.sh@354 -- # echo 1 00:04:55.131 16:14:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:55.131 16:14:14 -- scripts/common.sh@365 -- # decimal 2 00:04:55.131 16:14:14 -- scripts/common.sh@352 -- # local d=2 00:04:55.131 16:14:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.131 16:14:14 -- scripts/common.sh@354 -- # echo 2 00:04:55.131 16:14:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:55.131 16:14:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:55.131 16:14:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:55.131 16:14:14 -- scripts/common.sh@367 -- # return 0 00:04:55.131 16:14:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.131 16:14:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:55.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.131 --rc genhtml_branch_coverage=1 00:04:55.131 --rc genhtml_function_coverage=1 00:04:55.131 --rc genhtml_legend=1 00:04:55.131 --rc geninfo_all_blocks=1 00:04:55.131 --rc geninfo_unexecuted_blocks=1 00:04:55.131 00:04:55.131 ' 00:04:55.131 16:14:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:55.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.131 --rc genhtml_branch_coverage=1 00:04:55.131 --rc genhtml_function_coverage=1 00:04:55.131 --rc genhtml_legend=1 00:04:55.131 --rc geninfo_all_blocks=1 00:04:55.131 --rc geninfo_unexecuted_blocks=1 00:04:55.131 00:04:55.131 ' 00:04:55.131 16:14:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:55.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.131 --rc genhtml_branch_coverage=1 00:04:55.131 --rc genhtml_function_coverage=1 00:04:55.131 --rc genhtml_legend=1 00:04:55.131 --rc 
geninfo_all_blocks=1 00:04:55.131 --rc geninfo_unexecuted_blocks=1 00:04:55.131 00:04:55.131 ' 00:04:55.131 16:14:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:55.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.131 --rc genhtml_branch_coverage=1 00:04:55.131 --rc genhtml_function_coverage=1 00:04:55.131 --rc genhtml_legend=1 00:04:55.131 --rc geninfo_all_blocks=1 00:04:55.131 --rc geninfo_unexecuted_blocks=1 00:04:55.131 00:04:55.131 ' 00:04:55.131 16:14:14 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:55.131 16:14:14 -- nvmf/common.sh@7 -- # uname -s 00:04:55.131 16:14:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.131 16:14:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.131 16:14:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.131 16:14:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:55.131 16:14:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.131 16:14:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.131 16:14:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.131 16:14:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.131 16:14:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.131 16:14:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.131 16:14:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ca9637a-df03-470e-a17c-bcf9a22a1537 00:04:55.131 16:14:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ca9637a-df03-470e-a17c-bcf9a22a1537 00:04:55.131 16:14:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.131 16:14:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.131 16:14:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.131 16:14:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:55.131 16:14:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.131 16:14:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.131 16:14:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.131 16:14:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.131 16:14:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.131 16:14:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.131 
16:14:14 -- paths/export.sh@5 -- # export PATH 00:04:55.131 16:14:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.131 16:14:14 -- nvmf/common.sh@46 -- # : 0 00:04:55.131 16:14:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:55.131 16:14:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:55.131 16:14:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:55.131 16:14:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.131 16:14:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.131 16:14:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:55.131 16:14:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:55.131 16:14:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:55.131 WARNING: No tests are enabled so not running JSON configuration tests 00:04:55.131 16:14:14 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:04:55.131 16:14:14 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:04:55.131 16:14:14 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:04:55.131 16:14:14 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:55.131 16:14:14 -- json_config/json_config.sh@26 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:55.131 16:14:14 -- json_config/json_config.sh@27 -- # exit 0 00:04:55.131 00:04:55.131 real 0m0.133s 00:04:55.131 user 0m0.086s 00:04:55.131 sys 0m0.051s 00:04:55.131 16:14:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:55.131 ************************************ 00:04:55.131 END TEST json_config 00:04:55.131 ************************************ 00:04:55.131 16:14:14 -- common/autotest_common.sh@10 -- # set +x 00:04:55.131 16:14:14 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:55.131 16:14:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:55.131 16:14:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:55.131 16:14:14 -- common/autotest_common.sh@10 -- # set +x 00:04:55.131 ************************************ 00:04:55.131 START TEST json_config_extra_key 00:04:55.131 ************************************ 00:04:55.131 16:14:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:55.131 16:14:14 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:55.131 16:14:14 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:55.131 16:14:14 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:55.404 16:14:14 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:55.404 16:14:14 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:55.404 16:14:14 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:55.404 16:14:14 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:55.404 16:14:14 -- scripts/common.sh@335 -- # IFS=.-: 00:04:55.404 16:14:14 -- scripts/common.sh@335 -- # read -ra ver1 00:04:55.404 16:14:14 -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.404 16:14:14 
-- scripts/common.sh@336 -- # read -ra ver2 00:04:55.404 16:14:14 -- scripts/common.sh@337 -- # local 'op=<' 00:04:55.404 16:14:14 -- scripts/common.sh@339 -- # ver1_l=2 00:04:55.404 16:14:14 -- scripts/common.sh@340 -- # ver2_l=1 00:04:55.404 16:14:14 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:55.404 16:14:14 -- scripts/common.sh@343 -- # case "$op" in 00:04:55.404 16:14:14 -- scripts/common.sh@344 -- # : 1 00:04:55.404 16:14:14 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:55.404 16:14:14 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.404 16:14:14 -- scripts/common.sh@364 -- # decimal 1 00:04:55.404 16:14:14 -- scripts/common.sh@352 -- # local d=1 00:04:55.404 16:14:14 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.404 16:14:14 -- scripts/common.sh@354 -- # echo 1 00:04:55.404 16:14:14 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:55.404 16:14:14 -- scripts/common.sh@365 -- # decimal 2 00:04:55.404 16:14:14 -- scripts/common.sh@352 -- # local d=2 00:04:55.404 16:14:14 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.404 16:14:14 -- scripts/common.sh@354 -- # echo 2 00:04:55.404 16:14:14 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:55.404 16:14:14 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:55.404 16:14:14 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:55.404 16:14:14 -- scripts/common.sh@367 -- # return 0 00:04:55.404 16:14:14 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.404 16:14:14 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:55.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.404 --rc genhtml_branch_coverage=1 00:04:55.404 --rc genhtml_function_coverage=1 00:04:55.404 --rc genhtml_legend=1 00:04:55.404 --rc geninfo_all_blocks=1 00:04:55.404 --rc geninfo_unexecuted_blocks=1 00:04:55.404 00:04:55.404 ' 00:04:55.404 16:14:14 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:55.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.404 --rc genhtml_branch_coverage=1 00:04:55.404 --rc genhtml_function_coverage=1 00:04:55.404 --rc genhtml_legend=1 00:04:55.404 --rc geninfo_all_blocks=1 00:04:55.404 --rc geninfo_unexecuted_blocks=1 00:04:55.404 00:04:55.404 ' 00:04:55.404 16:14:14 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:55.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.404 --rc genhtml_branch_coverage=1 00:04:55.404 --rc genhtml_function_coverage=1 00:04:55.404 --rc genhtml_legend=1 00:04:55.404 --rc geninfo_all_blocks=1 00:04:55.404 --rc geninfo_unexecuted_blocks=1 00:04:55.404 00:04:55.404 ' 00:04:55.404 16:14:14 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:55.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.404 --rc genhtml_branch_coverage=1 00:04:55.404 --rc genhtml_function_coverage=1 00:04:55.404 --rc genhtml_legend=1 00:04:55.404 --rc geninfo_all_blocks=1 00:04:55.404 --rc geninfo_unexecuted_blocks=1 00:04:55.404 00:04:55.404 ' 00:04:55.404 16:14:14 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:55.404 16:14:14 -- nvmf/common.sh@7 -- # uname -s 00:04:55.404 16:14:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:55.404 16:14:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:55.404 16:14:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:55.404 16:14:14 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:04:55.404 16:14:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:55.404 16:14:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:55.404 16:14:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:55.405 16:14:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:55.405 16:14:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:55.405 16:14:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:55.405 16:14:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:2ca9637a-df03-470e-a17c-bcf9a22a1537 00:04:55.405 16:14:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=2ca9637a-df03-470e-a17c-bcf9a22a1537 00:04:55.405 16:14:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:55.405 16:14:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:55.405 16:14:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:55.405 16:14:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:55.405 16:14:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:55.405 16:14:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:55.405 16:14:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:55.405 16:14:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.405 16:14:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.405 16:14:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.405 16:14:14 -- paths/export.sh@5 -- # export PATH 00:04:55.405 16:14:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:55.405 16:14:14 -- nvmf/common.sh@46 -- # : 0 00:04:55.405 16:14:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:55.405 16:14:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:55.405 16:14:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:55.405 16:14:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:55.405 16:14:14 -- nvmf/common.sh@30 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:55.405 16:14:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:55.405 16:14:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:55.405 16:14:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:55.405 INFO: launching applications... 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=56453 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:55.405 Waiting for target to run... 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 56453 /var/tmp/spdk_tgt.sock 00:04:55.405 16:14:14 -- common/autotest_common.sh@829 -- # '[' -z 56453 ']' 00:04:55.405 16:14:14 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:55.405 16:14:14 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.405 16:14:14 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:55.405 16:14:14 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:55.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:55.405 16:14:14 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.405 16:14:14 -- common/autotest_common.sh@10 -- # set +x 00:04:55.405 [2024-11-09 16:14:14.994678] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
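The trace above is json_config_extra_key.sh bringing up its target: spdk_tgt is launched with the app_params mask ('-m 0x1 -s 1024'), a private RPC socket, and the extra_key.json config, and waitforlisten then blocks until that socket answers. A minimal sketch of the same launch-and-wait pattern; the binary path, flags, and socket name are verbatim from the trace, while the polling loop is an assumed stand-in for waitforlisten, whose body this log does not show:

spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
sock=/var/tmp/spdk_tgt.sock
"$spdk_tgt" -m 0x1 -s 1024 -r "$sock" \
    --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
app_pid=$!
for _ in $(seq 1 100); do        # max_retries=100, as in the trace
    [ -S "$sock" ] && break      # RPC socket exists -> target is up
    sleep 0.1                    # retry interval is an assumption
done

Shutdown mirrors this in reverse further down: SIGINT to the pid, then up to 30 rounds of 'kill -0' with 'sleep 0.5' until the process is gone.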
00:04:55.405 [2024-11-09 16:14:14.994784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56453 ] 00:04:55.664 [2024-11-09 16:14:15.285691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.664 [2024-11-09 16:14:15.426670] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:55.664 [2024-11-09 16:14:15.426820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.230 00:04:56.230 INFO: shutting down applications... 00:04:56.230 16:14:15 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:56.230 16:14:15 -- common/autotest_common.sh@862 -- # return 0 00:04:56.230 16:14:15 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:56.230 16:14:15 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:56.230 16:14:15 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:56.230 16:14:15 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:56.230 16:14:15 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:56.230 16:14:15 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 56453 ]] 00:04:56.230 16:14:15 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 56453 00:04:56.230 16:14:15 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:56.230 16:14:15 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:56.230 16:14:15 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56453 00:04:56.230 16:14:15 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:56.796 16:14:16 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:56.796 16:14:16 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:56.796 16:14:16 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56453 00:04:56.796 16:14:16 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:57.053 16:14:16 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:57.053 16:14:16 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:57.053 16:14:16 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56453 00:04:57.053 16:14:16 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:57.621 16:14:17 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:57.621 16:14:17 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:57.621 16:14:17 -- json_config/json_config_extra_key.sh@50 -- # kill -0 56453 00:04:57.621 16:14:17 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:57.621 16:14:17 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:57.621 SPDK target shutdown done 00:04:57.621 Success 00:04:57.621 16:14:17 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:57.621 16:14:17 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:57.621 16:14:17 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:57.621 00:04:57.621 real 0m2.525s 00:04:57.621 user 0m2.247s 00:04:57.621 sys 0m0.358s 00:04:57.621 16:14:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:57.621 ************************************ 00:04:57.621 END TEST json_config_extra_key 00:04:57.621 ************************************ 00:04:57.621 16:14:17 -- 
common/autotest_common.sh@10 -- # set +x 00:04:57.621 16:14:17 -- spdk/autotest.sh@167 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:57.621 16:14:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:57.621 16:14:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:57.621 16:14:17 -- common/autotest_common.sh@10 -- # set +x 00:04:57.621 ************************************ 00:04:57.621 START TEST alias_rpc 00:04:57.621 ************************************ 00:04:57.621 16:14:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:57.881 * Looking for test storage... 00:04:57.881 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:57.881 16:14:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:57.881 16:14:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:57.881 16:14:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:57.881 16:14:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:57.881 16:14:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:57.881 16:14:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:57.881 16:14:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:57.881 16:14:17 -- scripts/common.sh@335 -- # IFS=.-: 00:04:57.881 16:14:17 -- scripts/common.sh@335 -- # read -ra ver1 00:04:57.881 16:14:17 -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.881 16:14:17 -- scripts/common.sh@336 -- # read -ra ver2 00:04:57.881 16:14:17 -- scripts/common.sh@337 -- # local 'op=<' 00:04:57.881 16:14:17 -- scripts/common.sh@339 -- # ver1_l=2 00:04:57.881 16:14:17 -- scripts/common.sh@340 -- # ver2_l=1 00:04:57.881 16:14:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:57.881 16:14:17 -- scripts/common.sh@343 -- # case "$op" in 00:04:57.881 16:14:17 -- scripts/common.sh@344 -- # : 1 00:04:57.881 16:14:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:57.881 16:14:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.881 16:14:17 -- scripts/common.sh@364 -- # decimal 1 00:04:57.881 16:14:17 -- scripts/common.sh@352 -- # local d=1 00:04:57.881 16:14:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.881 16:14:17 -- scripts/common.sh@354 -- # echo 1 00:04:57.881 16:14:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:57.881 16:14:17 -- scripts/common.sh@365 -- # decimal 2 00:04:57.881 16:14:17 -- scripts/common.sh@352 -- # local d=2 00:04:57.881 16:14:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.881 16:14:17 -- scripts/common.sh@354 -- # echo 2 00:04:57.881 16:14:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:57.881 16:14:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:57.881 16:14:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:57.881 16:14:17 -- scripts/common.sh@367 -- # return 0 00:04:57.881 16:14:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.881 16:14:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:57.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.881 --rc genhtml_branch_coverage=1 00:04:57.881 --rc genhtml_function_coverage=1 00:04:57.881 --rc genhtml_legend=1 00:04:57.881 --rc geninfo_all_blocks=1 00:04:57.881 --rc geninfo_unexecuted_blocks=1 00:04:57.881 00:04:57.881 ' 00:04:57.881 16:14:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:57.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.881 --rc genhtml_branch_coverage=1 00:04:57.881 --rc genhtml_function_coverage=1 00:04:57.881 --rc genhtml_legend=1 00:04:57.881 --rc geninfo_all_blocks=1 00:04:57.881 --rc geninfo_unexecuted_blocks=1 00:04:57.881 00:04:57.881 ' 00:04:57.881 16:14:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:57.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.881 --rc genhtml_branch_coverage=1 00:04:57.881 --rc genhtml_function_coverage=1 00:04:57.881 --rc genhtml_legend=1 00:04:57.881 --rc geninfo_all_blocks=1 00:04:57.881 --rc geninfo_unexecuted_blocks=1 00:04:57.881 00:04:57.881 ' 00:04:57.881 16:14:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:57.881 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.881 --rc genhtml_branch_coverage=1 00:04:57.882 --rc genhtml_function_coverage=1 00:04:57.882 --rc genhtml_legend=1 00:04:57.882 --rc geninfo_all_blocks=1 00:04:57.882 --rc geninfo_unexecuted_blocks=1 00:04:57.882 00:04:57.882 ' 00:04:57.882 16:14:17 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:57.882 16:14:17 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=56544 00:04:57.882 16:14:17 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 56544 00:04:57.882 16:14:17 -- common/autotest_common.sh@829 -- # '[' -z 56544 ']' 00:04:57.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.882 16:14:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.882 16:14:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:57.882 16:14:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
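The scripts/common.sh trace repeated above is the lcov version gate: 'lt 1.15 2' splits both version strings on '.', '-', and ':' into arrays and compares them element by element, which decides whether the old-style '--rc lcov_*' coverage options are needed. A condensed reconstruction, assuming the wrapper shape; only the compared expressions are verbatim from the trace:

lt() {   # lt 1.15 2 -> exit 0 when version $1 < version $2
    local -a ver1 ver2
    local v
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"   # "2"    -> (2)
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal counts as not-less-than
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'old lcov: add --rc lcov_* options'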
00:04:57.882 16:14:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:57.882 16:14:17 -- common/autotest_common.sh@10 -- # set +x 00:04:57.882 16:14:17 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.882 [2024-11-09 16:14:17.590042] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:04:57.882 [2024-11-09 16:14:17.590162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56544 ] 00:04:58.141 [2024-11-09 16:14:17.737293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.141 [2024-11-09 16:14:17.873455] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:58.141 [2024-11-09 16:14:17.873611] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.707 16:14:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:58.707 16:14:18 -- common/autotest_common.sh@862 -- # return 0 00:04:58.707 16:14:18 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:58.965 16:14:18 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 56544 00:04:58.965 16:14:18 -- common/autotest_common.sh@936 -- # '[' -z 56544 ']' 00:04:58.965 16:14:18 -- common/autotest_common.sh@940 -- # kill -0 56544 00:04:58.965 16:14:18 -- common/autotest_common.sh@941 -- # uname 00:04:58.965 16:14:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:58.965 16:14:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56544 00:04:58.965 16:14:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:58.965 16:14:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:58.965 killing process with pid 56544 00:04:58.965 16:14:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56544' 00:04:58.965 16:14:18 -- common/autotest_common.sh@955 -- # kill 56544 00:04:58.965 16:14:18 -- common/autotest_common.sh@960 -- # wait 56544 00:05:00.343 00:05:00.343 real 0m2.404s 00:05:00.343 user 0m2.515s 00:05:00.343 sys 0m0.338s 00:05:00.343 16:14:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:00.343 ************************************ 00:05:00.343 END TEST alias_rpc 00:05:00.343 ************************************ 00:05:00.343 16:14:19 -- common/autotest_common.sh@10 -- # set +x 00:05:00.343 16:14:19 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:05:00.343 16:14:19 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:00.343 16:14:19 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:00.343 16:14:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:00.343 16:14:19 -- common/autotest_common.sh@10 -- # set +x 00:05:00.343 ************************************ 00:05:00.343 START TEST spdkcli_tcp 00:05:00.343 ************************************ 00:05:00.343 16:14:19 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:00.343 * Looking for test storage... 
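Teardown for alias_rpc, traced just above, goes through the common killprocess helper: confirm the pid is still alive with 'kill -0', identify the process name via ps (reactor_0 here, with a special case for sudo-wrapped processes), then kill and reap it. A sketch reconstructed from those traced lines; the sudo branch is elided, so treat this as an approximation rather than the helper's exact body:

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" || return 1                           # still running?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
    fi
    echo "killing process with pid $pid"
    kill "$pid"     # default SIGTERM, as in the trace
    wait "$pid"     # reap the child so the test sees its exit status
}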
00:05:00.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:00.343 16:14:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:00.343 16:14:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:00.343 16:14:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:00.343 16:14:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:00.343 16:14:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:00.343 16:14:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:00.343 16:14:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:00.343 16:14:19 -- scripts/common.sh@335 -- # IFS=.-: 00:05:00.343 16:14:19 -- scripts/common.sh@335 -- # read -ra ver1 00:05:00.343 16:14:19 -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.343 16:14:19 -- scripts/common.sh@336 -- # read -ra ver2 00:05:00.343 16:14:19 -- scripts/common.sh@337 -- # local 'op=<' 00:05:00.343 16:14:19 -- scripts/common.sh@339 -- # ver1_l=2 00:05:00.343 16:14:19 -- scripts/common.sh@340 -- # ver2_l=1 00:05:00.343 16:14:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:00.343 16:14:19 -- scripts/common.sh@343 -- # case "$op" in 00:05:00.343 16:14:19 -- scripts/common.sh@344 -- # : 1 00:05:00.343 16:14:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:00.343 16:14:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.343 16:14:19 -- scripts/common.sh@364 -- # decimal 1 00:05:00.343 16:14:19 -- scripts/common.sh@352 -- # local d=1 00:05:00.343 16:14:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.343 16:14:19 -- scripts/common.sh@354 -- # echo 1 00:05:00.343 16:14:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:00.343 16:14:19 -- scripts/common.sh@365 -- # decimal 2 00:05:00.343 16:14:19 -- scripts/common.sh@352 -- # local d=2 00:05:00.343 16:14:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.343 16:14:19 -- scripts/common.sh@354 -- # echo 2 00:05:00.343 16:14:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:00.343 16:14:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:00.343 16:14:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:00.343 16:14:19 -- scripts/common.sh@367 -- # return 0 00:05:00.343 16:14:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.343 16:14:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:00.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.343 --rc genhtml_branch_coverage=1 00:05:00.343 --rc genhtml_function_coverage=1 00:05:00.343 --rc genhtml_legend=1 00:05:00.343 --rc geninfo_all_blocks=1 00:05:00.343 --rc geninfo_unexecuted_blocks=1 00:05:00.343 00:05:00.343 ' 00:05:00.343 16:14:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:00.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.343 --rc genhtml_branch_coverage=1 00:05:00.343 --rc genhtml_function_coverage=1 00:05:00.343 --rc genhtml_legend=1 00:05:00.343 --rc geninfo_all_blocks=1 00:05:00.343 --rc geninfo_unexecuted_blocks=1 00:05:00.343 00:05:00.343 ' 00:05:00.343 16:14:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:00.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.343 --rc genhtml_branch_coverage=1 00:05:00.343 --rc genhtml_function_coverage=1 00:05:00.343 --rc genhtml_legend=1 00:05:00.343 --rc geninfo_all_blocks=1 00:05:00.343 --rc geninfo_unexecuted_blocks=1 00:05:00.343 00:05:00.343 ' 00:05:00.343 16:14:19 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:00.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.343 --rc genhtml_branch_coverage=1 00:05:00.343 --rc genhtml_function_coverage=1 00:05:00.343 --rc genhtml_legend=1 00:05:00.343 --rc geninfo_all_blocks=1 00:05:00.343 --rc geninfo_unexecuted_blocks=1 00:05:00.343 00:05:00.343 ' 00:05:00.343 16:14:19 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:00.343 16:14:19 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:00.343 16:14:19 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:00.343 16:14:19 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:00.343 16:14:19 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:00.343 16:14:19 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:00.343 16:14:19 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:00.343 16:14:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:00.343 16:14:19 -- common/autotest_common.sh@10 -- # set +x 00:05:00.343 16:14:19 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=56628 00:05:00.343 16:14:19 -- spdkcli/tcp.sh@27 -- # waitforlisten 56628 00:05:00.343 16:14:19 -- common/autotest_common.sh@829 -- # '[' -z 56628 ']' 00:05:00.343 16:14:19 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:00.343 16:14:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.343 16:14:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:00.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.343 16:14:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.343 16:14:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:00.343 16:14:19 -- common/autotest_common.sh@10 -- # set +x 00:05:00.343 [2024-11-09 16:14:20.042977] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
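The next traced commands are the heart of spdkcli_tcp: the target only listens on a UNIX-domain socket, so the test bridges TCP port 9998 onto /var/tmp/spdk.sock with socat and then drives rpc.py against the TCP side, which is what produces the long rpc_get_methods listing below. This sketch mirrors the traced invocations (tcp.sh@30-33); running it outside the harness, against your own target, is untested:

socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &   # TCP <-> UNIX bridge
socat_pid=$!
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
    -s 127.0.0.1 -p 9998 rpc_get_methods                  # -r retries, -t timeout (s)
kill "$socat_pid"

The bridge lets the test exercise rpc.py's TCP transport while the target keeps its default UNIX-domain listener.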
00:05:00.343 [2024-11-09 16:14:20.043433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56628 ] 00:05:00.602 [2024-11-09 16:14:20.190114] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.602 [2024-11-09 16:14:20.333351] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:00.603 [2024-11-09 16:14:20.333644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.603 [2024-11-09 16:14:20.333794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.169 16:14:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.169 16:14:20 -- common/autotest_common.sh@862 -- # return 0 00:05:01.169 16:14:20 -- spdkcli/tcp.sh@31 -- # socat_pid=56645 00:05:01.169 16:14:20 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:01.169 16:14:20 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:01.430 [ 00:05:01.430 "bdev_malloc_delete", 00:05:01.430 "bdev_malloc_create", 00:05:01.430 "bdev_null_resize", 00:05:01.430 "bdev_null_delete", 00:05:01.430 "bdev_null_create", 00:05:01.430 "bdev_nvme_cuse_unregister", 00:05:01.430 "bdev_nvme_cuse_register", 00:05:01.430 "bdev_opal_new_user", 00:05:01.430 "bdev_opal_set_lock_state", 00:05:01.430 "bdev_opal_delete", 00:05:01.430 "bdev_opal_get_info", 00:05:01.430 "bdev_opal_create", 00:05:01.430 "bdev_nvme_opal_revert", 00:05:01.430 "bdev_nvme_opal_init", 00:05:01.430 "bdev_nvme_send_cmd", 00:05:01.430 "bdev_nvme_get_path_iostat", 00:05:01.430 "bdev_nvme_get_mdns_discovery_info", 00:05:01.430 "bdev_nvme_stop_mdns_discovery", 00:05:01.430 "bdev_nvme_start_mdns_discovery", 00:05:01.430 "bdev_nvme_set_multipath_policy", 00:05:01.430 "bdev_nvme_set_preferred_path", 00:05:01.430 "bdev_nvme_get_io_paths", 00:05:01.430 "bdev_nvme_remove_error_injection", 00:05:01.430 "bdev_nvme_add_error_injection", 00:05:01.430 "bdev_nvme_get_discovery_info", 00:05:01.430 "bdev_nvme_stop_discovery", 00:05:01.430 "bdev_nvme_start_discovery", 00:05:01.430 "bdev_nvme_get_controller_health_info", 00:05:01.430 "bdev_nvme_disable_controller", 00:05:01.430 "bdev_nvme_enable_controller", 00:05:01.430 "bdev_nvme_reset_controller", 00:05:01.430 "bdev_nvme_get_transport_statistics", 00:05:01.430 "bdev_nvme_apply_firmware", 00:05:01.430 "bdev_nvme_detach_controller", 00:05:01.430 "bdev_nvme_get_controllers", 00:05:01.430 "bdev_nvme_attach_controller", 00:05:01.430 "bdev_nvme_set_hotplug", 00:05:01.430 "bdev_nvme_set_options", 00:05:01.430 "bdev_passthru_delete", 00:05:01.430 "bdev_passthru_create", 00:05:01.430 "bdev_lvol_grow_lvstore", 00:05:01.430 "bdev_lvol_get_lvols", 00:05:01.430 "bdev_lvol_get_lvstores", 00:05:01.430 "bdev_lvol_delete", 00:05:01.430 "bdev_lvol_set_read_only", 00:05:01.430 "bdev_lvol_resize", 00:05:01.430 "bdev_lvol_decouple_parent", 00:05:01.430 "bdev_lvol_inflate", 00:05:01.430 "bdev_lvol_rename", 00:05:01.430 "bdev_lvol_clone_bdev", 00:05:01.430 "bdev_lvol_clone", 00:05:01.430 "bdev_lvol_snapshot", 00:05:01.430 "bdev_lvol_create", 00:05:01.430 "bdev_lvol_delete_lvstore", 00:05:01.430 "bdev_lvol_rename_lvstore", 00:05:01.430 "bdev_lvol_create_lvstore", 00:05:01.430 "bdev_raid_set_options", 00:05:01.430 "bdev_raid_remove_base_bdev", 00:05:01.430 "bdev_raid_add_base_bdev", 
00:05:01.430 "bdev_raid_delete", 00:05:01.430 "bdev_raid_create", 00:05:01.430 "bdev_raid_get_bdevs", 00:05:01.430 "bdev_error_inject_error", 00:05:01.430 "bdev_error_delete", 00:05:01.430 "bdev_error_create", 00:05:01.430 "bdev_split_delete", 00:05:01.430 "bdev_split_create", 00:05:01.430 "bdev_delay_delete", 00:05:01.430 "bdev_delay_create", 00:05:01.430 "bdev_delay_update_latency", 00:05:01.430 "bdev_zone_block_delete", 00:05:01.430 "bdev_zone_block_create", 00:05:01.430 "blobfs_create", 00:05:01.430 "blobfs_detect", 00:05:01.430 "blobfs_set_cache_size", 00:05:01.430 "bdev_xnvme_delete", 00:05:01.430 "bdev_xnvme_create", 00:05:01.430 "bdev_aio_delete", 00:05:01.430 "bdev_aio_rescan", 00:05:01.430 "bdev_aio_create", 00:05:01.430 "bdev_ftl_set_property", 00:05:01.430 "bdev_ftl_get_properties", 00:05:01.430 "bdev_ftl_get_stats", 00:05:01.430 "bdev_ftl_unmap", 00:05:01.430 "bdev_ftl_unload", 00:05:01.430 "bdev_ftl_delete", 00:05:01.430 "bdev_ftl_load", 00:05:01.430 "bdev_ftl_create", 00:05:01.430 "bdev_virtio_attach_controller", 00:05:01.430 "bdev_virtio_scsi_get_devices", 00:05:01.430 "bdev_virtio_detach_controller", 00:05:01.430 "bdev_virtio_blk_set_hotplug", 00:05:01.430 "bdev_iscsi_delete", 00:05:01.430 "bdev_iscsi_create", 00:05:01.430 "bdev_iscsi_set_options", 00:05:01.430 "accel_error_inject_error", 00:05:01.430 "ioat_scan_accel_module", 00:05:01.430 "dsa_scan_accel_module", 00:05:01.430 "iaa_scan_accel_module", 00:05:01.430 "iscsi_set_options", 00:05:01.430 "iscsi_get_auth_groups", 00:05:01.430 "iscsi_auth_group_remove_secret", 00:05:01.430 "iscsi_auth_group_add_secret", 00:05:01.430 "iscsi_delete_auth_group", 00:05:01.430 "iscsi_create_auth_group", 00:05:01.430 "iscsi_set_discovery_auth", 00:05:01.430 "iscsi_get_options", 00:05:01.430 "iscsi_target_node_request_logout", 00:05:01.430 "iscsi_target_node_set_redirect", 00:05:01.430 "iscsi_target_node_set_auth", 00:05:01.430 "iscsi_target_node_add_lun", 00:05:01.430 "iscsi_get_connections", 00:05:01.430 "iscsi_portal_group_set_auth", 00:05:01.430 "iscsi_start_portal_group", 00:05:01.430 "iscsi_delete_portal_group", 00:05:01.430 "iscsi_create_portal_group", 00:05:01.430 "iscsi_get_portal_groups", 00:05:01.430 "iscsi_delete_target_node", 00:05:01.430 "iscsi_target_node_remove_pg_ig_maps", 00:05:01.430 "iscsi_target_node_add_pg_ig_maps", 00:05:01.430 "iscsi_create_target_node", 00:05:01.430 "iscsi_get_target_nodes", 00:05:01.430 "iscsi_delete_initiator_group", 00:05:01.430 "iscsi_initiator_group_remove_initiators", 00:05:01.430 "iscsi_initiator_group_add_initiators", 00:05:01.430 "iscsi_create_initiator_group", 00:05:01.430 "iscsi_get_initiator_groups", 00:05:01.430 "nvmf_set_crdt", 00:05:01.430 "nvmf_set_config", 00:05:01.430 "nvmf_set_max_subsystems", 00:05:01.430 "nvmf_subsystem_get_listeners", 00:05:01.430 "nvmf_subsystem_get_qpairs", 00:05:01.430 "nvmf_subsystem_get_controllers", 00:05:01.430 "nvmf_get_stats", 00:05:01.430 "nvmf_get_transports", 00:05:01.430 "nvmf_create_transport", 00:05:01.430 "nvmf_get_targets", 00:05:01.430 "nvmf_delete_target", 00:05:01.430 "nvmf_create_target", 00:05:01.430 "nvmf_subsystem_allow_any_host", 00:05:01.430 "nvmf_subsystem_remove_host", 00:05:01.430 "nvmf_subsystem_add_host", 00:05:01.430 "nvmf_subsystem_remove_ns", 00:05:01.430 "nvmf_subsystem_add_ns", 00:05:01.430 "nvmf_subsystem_listener_set_ana_state", 00:05:01.430 "nvmf_discovery_get_referrals", 00:05:01.430 "nvmf_discovery_remove_referral", 00:05:01.430 "nvmf_discovery_add_referral", 00:05:01.430 "nvmf_subsystem_remove_listener", 00:05:01.430 
"nvmf_subsystem_add_listener", 00:05:01.430 "nvmf_delete_subsystem", 00:05:01.430 "nvmf_create_subsystem", 00:05:01.430 "nvmf_get_subsystems", 00:05:01.430 "env_dpdk_get_mem_stats", 00:05:01.430 "nbd_get_disks", 00:05:01.430 "nbd_stop_disk", 00:05:01.430 "nbd_start_disk", 00:05:01.430 "ublk_recover_disk", 00:05:01.430 "ublk_get_disks", 00:05:01.430 "ublk_stop_disk", 00:05:01.430 "ublk_start_disk", 00:05:01.430 "ublk_destroy_target", 00:05:01.430 "ublk_create_target", 00:05:01.430 "virtio_blk_create_transport", 00:05:01.430 "virtio_blk_get_transports", 00:05:01.430 "vhost_controller_set_coalescing", 00:05:01.430 "vhost_get_controllers", 00:05:01.430 "vhost_delete_controller", 00:05:01.430 "vhost_create_blk_controller", 00:05:01.430 "vhost_scsi_controller_remove_target", 00:05:01.430 "vhost_scsi_controller_add_target", 00:05:01.430 "vhost_start_scsi_controller", 00:05:01.430 "vhost_create_scsi_controller", 00:05:01.430 "thread_set_cpumask", 00:05:01.430 "framework_get_scheduler", 00:05:01.430 "framework_set_scheduler", 00:05:01.430 "framework_get_reactors", 00:05:01.430 "thread_get_io_channels", 00:05:01.430 "thread_get_pollers", 00:05:01.430 "thread_get_stats", 00:05:01.430 "framework_monitor_context_switch", 00:05:01.430 "spdk_kill_instance", 00:05:01.430 "log_enable_timestamps", 00:05:01.430 "log_get_flags", 00:05:01.430 "log_clear_flag", 00:05:01.430 "log_set_flag", 00:05:01.430 "log_get_level", 00:05:01.430 "log_set_level", 00:05:01.430 "log_get_print_level", 00:05:01.430 "log_set_print_level", 00:05:01.430 "framework_enable_cpumask_locks", 00:05:01.430 "framework_disable_cpumask_locks", 00:05:01.430 "framework_wait_init", 00:05:01.430 "framework_start_init", 00:05:01.430 "scsi_get_devices", 00:05:01.430 "bdev_get_histogram", 00:05:01.430 "bdev_enable_histogram", 00:05:01.430 "bdev_set_qos_limit", 00:05:01.430 "bdev_set_qd_sampling_period", 00:05:01.431 "bdev_get_bdevs", 00:05:01.431 "bdev_reset_iostat", 00:05:01.431 "bdev_get_iostat", 00:05:01.431 "bdev_examine", 00:05:01.431 "bdev_wait_for_examine", 00:05:01.431 "bdev_set_options", 00:05:01.431 "notify_get_notifications", 00:05:01.431 "notify_get_types", 00:05:01.431 "accel_get_stats", 00:05:01.431 "accel_set_options", 00:05:01.431 "accel_set_driver", 00:05:01.431 "accel_crypto_key_destroy", 00:05:01.431 "accel_crypto_keys_get", 00:05:01.431 "accel_crypto_key_create", 00:05:01.431 "accel_assign_opc", 00:05:01.431 "accel_get_module_info", 00:05:01.431 "accel_get_opc_assignments", 00:05:01.431 "vmd_rescan", 00:05:01.431 "vmd_remove_device", 00:05:01.431 "vmd_enable", 00:05:01.431 "sock_set_default_impl", 00:05:01.431 "sock_impl_set_options", 00:05:01.431 "sock_impl_get_options", 00:05:01.431 "iobuf_get_stats", 00:05:01.431 "iobuf_set_options", 00:05:01.431 "framework_get_pci_devices", 00:05:01.431 "framework_get_config", 00:05:01.431 "framework_get_subsystems", 00:05:01.431 "trace_get_info", 00:05:01.431 "trace_get_tpoint_group_mask", 00:05:01.431 "trace_disable_tpoint_group", 00:05:01.431 "trace_enable_tpoint_group", 00:05:01.431 "trace_clear_tpoint_mask", 00:05:01.431 "trace_set_tpoint_mask", 00:05:01.431 "spdk_get_version", 00:05:01.431 "rpc_get_methods" 00:05:01.431 ] 00:05:01.431 16:14:21 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:01.431 16:14:21 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:01.431 16:14:21 -- common/autotest_common.sh@10 -- # set +x 00:05:01.431 16:14:21 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:01.431 16:14:21 -- spdkcli/tcp.sh@38 -- # killprocess 56628 00:05:01.431 
16:14:21 -- common/autotest_common.sh@936 -- # '[' -z 56628 ']' 00:05:01.431 16:14:21 -- common/autotest_common.sh@940 -- # kill -0 56628 00:05:01.431 16:14:21 -- common/autotest_common.sh@941 -- # uname 00:05:01.431 16:14:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:01.431 16:14:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56628 00:05:01.431 killing process with pid 56628 00:05:01.431 16:14:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:01.431 16:14:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:01.431 16:14:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56628' 00:05:01.431 16:14:21 -- common/autotest_common.sh@955 -- # kill 56628 00:05:01.431 16:14:21 -- common/autotest_common.sh@960 -- # wait 56628 00:05:03.342 00:05:03.342 real 0m2.792s 00:05:03.342 user 0m4.912s 00:05:03.342 sys 0m0.393s 00:05:03.342 16:14:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:03.342 16:14:22 -- common/autotest_common.sh@10 -- # set +x 00:05:03.342 ************************************ 00:05:03.342 END TEST spdkcli_tcp 00:05:03.342 ************************************ 00:05:03.342 16:14:22 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.342 16:14:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:03.342 16:14:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:03.342 16:14:22 -- common/autotest_common.sh@10 -- # set +x 00:05:03.342 ************************************ 00:05:03.342 START TEST dpdk_mem_utility 00:05:03.342 ************************************ 00:05:03.342 16:14:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:03.342 * Looking for test storage... 00:05:03.342 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:03.342 16:14:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:03.342 16:14:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:03.342 16:14:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:03.342 16:14:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:03.342 16:14:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:03.342 16:14:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:03.342 16:14:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:03.342 16:14:22 -- scripts/common.sh@335 -- # IFS=.-: 00:05:03.342 16:14:22 -- scripts/common.sh@335 -- # read -ra ver1 00:05:03.342 16:14:22 -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.342 16:14:22 -- scripts/common.sh@336 -- # read -ra ver2 00:05:03.342 16:14:22 -- scripts/common.sh@337 -- # local 'op=<' 00:05:03.342 16:14:22 -- scripts/common.sh@339 -- # ver1_l=2 00:05:03.342 16:14:22 -- scripts/common.sh@340 -- # ver2_l=1 00:05:03.342 16:14:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:03.342 16:14:22 -- scripts/common.sh@343 -- # case "$op" in 00:05:03.342 16:14:22 -- scripts/common.sh@344 -- # : 1 00:05:03.342 16:14:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:03.342 16:14:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.342 16:14:22 -- scripts/common.sh@364 -- # decimal 1 00:05:03.342 16:14:22 -- scripts/common.sh@352 -- # local d=1 00:05:03.342 16:14:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.342 16:14:22 -- scripts/common.sh@354 -- # echo 1 00:05:03.342 16:14:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:03.342 16:14:22 -- scripts/common.sh@365 -- # decimal 2 00:05:03.342 16:14:22 -- scripts/common.sh@352 -- # local d=2 00:05:03.342 16:14:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.342 16:14:22 -- scripts/common.sh@354 -- # echo 2 00:05:03.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.342 16:14:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:03.342 16:14:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:03.342 16:14:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:03.342 16:14:22 -- scripts/common.sh@367 -- # return 0 00:05:03.342 16:14:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.342 16:14:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:03.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.342 --rc genhtml_branch_coverage=1 00:05:03.342 --rc genhtml_function_coverage=1 00:05:03.342 --rc genhtml_legend=1 00:05:03.342 --rc geninfo_all_blocks=1 00:05:03.342 --rc geninfo_unexecuted_blocks=1 00:05:03.342 00:05:03.342 ' 00:05:03.342 16:14:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:03.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.342 --rc genhtml_branch_coverage=1 00:05:03.342 --rc genhtml_function_coverage=1 00:05:03.342 --rc genhtml_legend=1 00:05:03.342 --rc geninfo_all_blocks=1 00:05:03.342 --rc geninfo_unexecuted_blocks=1 00:05:03.342 00:05:03.342 ' 00:05:03.342 16:14:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:03.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.342 --rc genhtml_branch_coverage=1 00:05:03.342 --rc genhtml_function_coverage=1 00:05:03.342 --rc genhtml_legend=1 00:05:03.342 --rc geninfo_all_blocks=1 00:05:03.342 --rc geninfo_unexecuted_blocks=1 00:05:03.342 00:05:03.342 ' 00:05:03.342 16:14:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:03.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.342 --rc genhtml_branch_coverage=1 00:05:03.342 --rc genhtml_function_coverage=1 00:05:03.342 --rc genhtml_legend=1 00:05:03.343 --rc geninfo_all_blocks=1 00:05:03.343 --rc geninfo_unexecuted_blocks=1 00:05:03.343 00:05:03.343 ' 00:05:03.343 16:14:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:03.343 16:14:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=56738 00:05:03.343 16:14:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 56738 00:05:03.343 16:14:22 -- common/autotest_common.sh@829 -- # '[' -z 56738 ']' 00:05:03.343 16:14:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.343 16:14:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.343 16:14:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
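The dpdk_mem_utility flow traced below is two steps: ask the live target (pid 56738 here) to dump its DPDK memory state, then post-process that dump with dpdk_mem_info.py, first as a heap/mempool summary and then with '-m 0' for per-heap detail. Paths are from the log; the harness calls the RPC through its rpc_cmd wrapper, so invoking rpc.py directly is an assumption:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
# -> { "filename": "/tmp/spdk_mem_dump.txt" }, which the script below parses
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py        # heaps and mempools
/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0   # element-level dump for heap 0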
00:05:03.343 16:14:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.343 16:14:22 -- common/autotest_common.sh@10 -- # set +x 00:05:03.343 16:14:22 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.343 [2024-11-09 16:14:22.872241] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:03.343 [2024-11-09 16:14:22.872341] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56738 ] 00:05:03.343 [2024-11-09 16:14:23.009637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.600 [2024-11-09 16:14:23.191012] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:03.600 [2024-11-09 16:14:23.191205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.013 16:14:24 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:05.013 16:14:24 -- common/autotest_common.sh@862 -- # return 0 00:05:05.013 16:14:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:05.013 16:14:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:05.013 16:14:24 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:05.013 16:14:24 -- common/autotest_common.sh@10 -- # set +x 00:05:05.013 { 00:05:05.013 "filename": "/tmp/spdk_mem_dump.txt" 00:05:05.013 } 00:05:05.013 16:14:24 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:05.013 16:14:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:05.013 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:05.013 1 heaps totaling size 820.000000 MiB 00:05:05.013 size: 820.000000 MiB heap id: 0 00:05:05.013 end heaps---------- 00:05:05.013 8 mempools totaling size 598.116089 MiB 00:05:05.013 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:05.013 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:05.013 size: 84.521057 MiB name: bdev_io_56738 00:05:05.013 size: 51.011292 MiB name: evtpool_56738 00:05:05.013 size: 50.003479 MiB name: msgpool_56738 00:05:05.013 size: 21.763794 MiB name: PDU_Pool 00:05:05.013 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:05.013 size: 0.026123 MiB name: Session_Pool 00:05:05.013 end mempools------- 00:05:05.013 6 memzones totaling size 4.142822 MiB 00:05:05.013 size: 1.000366 MiB name: RG_ring_0_56738 00:05:05.013 size: 1.000366 MiB name: RG_ring_1_56738 00:05:05.013 size: 1.000366 MiB name: RG_ring_4_56738 00:05:05.013 size: 1.000366 MiB name: RG_ring_5_56738 00:05:05.013 size: 0.125366 MiB name: RG_ring_2_56738 00:05:05.013 size: 0.015991 MiB name: RG_ring_3_56738 00:05:05.013 end memzones------- 00:05:05.013 16:14:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:05.014 heap id: 0 total size: 820.000000 MiB number of busy elements: 303 number of free elements: 18 00:05:05.014 list of free elements. 
size: 18.450806 MiB 00:05:05.014 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:05.014 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:05.014 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:05.014 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:05.014 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:05.014 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:05.014 element at address: 0x200019600000 with size: 0.999084 MiB 00:05:05.014 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:05.014 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:05.014 element at address: 0x200018e00000 with size: 0.959656 MiB 00:05:05.014 element at address: 0x200019900040 with size: 0.936401 MiB 00:05:05.014 element at address: 0x200000200000 with size: 0.829224 MiB 00:05:05.014 element at address: 0x20001b000000 with size: 0.564392 MiB 00:05:05.014 element at address: 0x200019200000 with size: 0.487976 MiB 00:05:05.014 element at address: 0x200019a00000 with size: 0.485413 MiB 00:05:05.014 element at address: 0x200013800000 with size: 0.467651 MiB 00:05:05.014 element at address: 0x200028400000 with size: 0.390442 MiB 00:05:05.014 element at address: 0x200003a00000 with size: 0.351990 MiB 00:05:05.014 list of standard malloc elements. size: 199.284790 MiB 00:05:05.014 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:05.014 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:05.014 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:05.014 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:05.014 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:05.014 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:05.014 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:05.014 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:05.014 element at address: 0x20000b1ff040 with size: 0.000427 MiB 00:05:05.014 element at address: 0x2000199efdc0 with size: 0.000366 MiB 00:05:05.014 element at address: 0x2000137ff040 with size: 0.000305 MiB 00:05:05.014 element at address: 0x2000002d4480 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d4580 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d4680 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d4780 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d4880 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d4980 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d4a80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d4b80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d4c80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d4d80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d4e80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d4f80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d5680 with size: 0.000244 MiB 
00:05:05.014 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d6100 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d6200 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d6300 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003a5a1c0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003a5a2c0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003a5a3c0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003a5a4c0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003a5a5c0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003a5a6c0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003a5a7c0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003a5a8c0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003a5a9c0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003a5aac0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003a5abc0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003a5acc0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003a5adc0 with size: 0.000244 MiB 00:05:05.014 element at 
address: 0x200003a5aec0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003a5afc0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003a5b0c0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003a5b1c0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20000b1ff200 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20000b1ff300 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20000b1ff400 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20000b1ff500 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20000b1ff600 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20000b1ff700 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20000b1ff800 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20000b1ff900 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20000b1ffa00 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20000b1ffb00 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000137ff180 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000137ff280 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000137ff380 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000137ff480 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000137ff580 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000137ff680 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000137ff780 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000137ff880 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000137ff980 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000137ffa80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200013877b80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200013877c80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200013877d80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200013877e80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200013877f80 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200013878080 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200013878180 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200013878280 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200013878380 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200013878480 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200013878580 with size: 0.000244 MiB 00:05:05.014 element at address: 0x2000138f88c0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20001927cec0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20001927cfc0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20001927d0c0 
with size: 0.000244 MiB 00:05:05.014 element at address: 0x20001927d1c0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20001927d2c0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20001927d3c0 with size: 0.000244 MiB 00:05:05.014 element at address: 0x20001927d4c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001927d5c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001927d6c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001927d7c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001927d8c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001927d9c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:05.015 element at address: 0x2000196ffc40 with size: 0.000244 MiB 00:05:05.015 element at address: 0x2000199efbc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x2000199efcc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x200019abc680 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0907c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0908c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0909c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b090ac0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b090bc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b090cc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b090dc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b090ec0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b090fc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0910c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0911c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0912c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0913c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0914c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0915c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0916c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0917c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0918c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0919c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b091ac0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b091bc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b091cc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b091dc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b091ec0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b091fc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0920c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0921c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0922c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0923c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0924c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0925c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0926c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0927c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0928c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0929c0 with size: 0.000244 MiB 
00:05:05.015 element at address: 0x20001b092ac0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b092bc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b092cc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b092dc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b092ec0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b092fc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0930c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0931c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0932c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0933c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0934c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0935c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0936c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0937c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0938c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0939c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b093ac0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b093bc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b093cc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b093dc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b093ec0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b093fc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0940c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0941c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0942c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0943c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0944c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0945c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0946c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0947c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0948c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0949c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b094ac0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b094bc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b094cc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b094dc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b094ec0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b094fc0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0950c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0951c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0952c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20001b0953c0 with size: 0.000244 MiB 00:05:05.015 element at address: 0x200028463f40 with size: 0.000244 MiB 00:05:05.015 element at address: 0x200028464040 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846ad00 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846af80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846b080 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846b180 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846b280 with size: 0.000244 MiB 00:05:05.015 element at 
address: 0x20002846b380 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846b480 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846b580 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846b680 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846b780 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846b880 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846b980 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846ba80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846bb80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846bc80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846bd80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846be80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846bf80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846c080 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846c180 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846c280 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846c380 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846c480 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846c580 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846c680 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846c780 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846c880 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846c980 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846ca80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846cb80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846cc80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846cd80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846ce80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846cf80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846d080 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846d180 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846d280 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846d380 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846d480 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846d580 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846d680 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846d780 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846d880 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846d980 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846da80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846db80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846dc80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846dd80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846de80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846df80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846e080 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846e180 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846e280 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846e380 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846e480 
with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846e580 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846e680 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846e780 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846e880 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846e980 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846ea80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846eb80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846ec80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846ed80 with size: 0.000244 MiB 00:05:05.015 element at address: 0x20002846ee80 with size: 0.000244 MiB 00:05:05.016 element at address: 0x20002846ef80 with size: 0.000244 MiB 00:05:05.016 element at address: 0x20002846f080 with size: 0.000244 MiB 00:05:05.016 element at address: 0x20002846f180 with size: 0.000244 MiB 00:05:05.016 element at address: 0x20002846f280 with size: 0.000244 MiB 00:05:05.016 element at address: 0x20002846f380 with size: 0.000244 MiB 00:05:05.016 element at address: 0x20002846f480 with size: 0.000244 MiB 00:05:05.016 element at address: 0x20002846f580 with size: 0.000244 MiB 00:05:05.016 element at address: 0x20002846f680 with size: 0.000244 MiB 00:05:05.016 element at address: 0x20002846f780 with size: 0.000244 MiB 00:05:05.016 element at address: 0x20002846f880 with size: 0.000244 MiB 00:05:05.016 element at address: 0x20002846f980 with size: 0.000244 MiB 00:05:05.016 element at address: 0x20002846fa80 with size: 0.000244 MiB 00:05:05.016 element at address: 0x20002846fb80 with size: 0.000244 MiB 00:05:05.016 element at address: 0x20002846fc80 with size: 0.000244 MiB 00:05:05.016 element at address: 0x20002846fd80 with size: 0.000244 MiB 00:05:05.016 element at address: 0x20002846fe80 with size: 0.000244 MiB 00:05:05.016 list of memzone associated elements. 
size: 602.264404 MiB 00:05:05.016 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:05.016 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:05.016 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:05.016 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:05.016 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:05.016 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_56738_0 00:05:05.016 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:05.016 associated memzone info: size: 48.002930 MiB name: MP_evtpool_56738_0 00:05:05.016 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:05.016 associated memzone info: size: 48.002930 MiB name: MP_msgpool_56738_0 00:05:05.016 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:05.016 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:05.016 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:05.016 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:05.016 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:05.016 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_56738 00:05:05.016 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:05.016 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_56738 00:05:05.016 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:05.016 associated memzone info: size: 1.007996 MiB name: MP_evtpool_56738 00:05:05.016 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:05.016 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:05.016 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:05.016 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:05.016 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:05.016 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:05.016 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:05.016 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:05.016 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:05.016 associated memzone info: size: 1.000366 MiB name: RG_ring_0_56738 00:05:05.016 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:05.016 associated memzone info: size: 1.000366 MiB name: RG_ring_1_56738 00:05:05.016 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:05.016 associated memzone info: size: 1.000366 MiB name: RG_ring_4_56738 00:05:05.016 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:05.016 associated memzone info: size: 1.000366 MiB name: RG_ring_5_56738 00:05:05.016 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:05.016 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_56738 00:05:05.016 element at address: 0x20001927dac0 with size: 0.500549 MiB 00:05:05.016 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:05.016 element at address: 0x200013878680 with size: 0.500549 MiB 00:05:05.016 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:05.016 element at address: 0x200019a7c440 with size: 0.250549 MiB 00:05:05.016 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:05.016 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:05.016 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_56738 00:05:05.016 element at address: 0x200018ef5ac0 with size: 0.031799 MiB 00:05:05.016 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:05.016 element at address: 0x200028464140 with size: 0.023804 MiB 00:05:05.016 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:05.016 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:05.016 associated memzone info: size: 0.015991 MiB name: RG_ring_3_56738 00:05:05.016 element at address: 0x20002846a2c0 with size: 0.002502 MiB 00:05:05.016 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:05.016 element at address: 0x2000002d5f80 with size: 0.000366 MiB 00:05:05.016 associated memzone info: size: 0.000183 MiB name: MP_msgpool_56738 00:05:05.016 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:05.016 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_56738 00:05:05.016 element at address: 0x20002846ae00 with size: 0.000366 MiB 00:05:05.016 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:05.016 16:14:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:05.016 16:14:24 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 56738 00:05:05.016 16:14:24 -- common/autotest_common.sh@936 -- # '[' -z 56738 ']' 00:05:05.016 16:14:24 -- common/autotest_common.sh@940 -- # kill -0 56738 00:05:05.016 16:14:24 -- common/autotest_common.sh@941 -- # uname 00:05:05.016 16:14:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:05.016 16:14:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56738 00:05:05.016 16:14:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:05.016 16:14:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:05.016 16:14:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56738' 00:05:05.016 killing process with pid 56738 00:05:05.016 16:14:24 -- common/autotest_common.sh@955 -- # kill 56738 00:05:05.016 16:14:24 -- common/autotest_common.sh@960 -- # wait 56738 00:05:06.392 00:05:06.392 real 0m3.068s 00:05:06.392 user 0m3.143s 00:05:06.392 sys 0m0.445s 00:05:06.392 16:14:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:06.392 ************************************ 00:05:06.392 END TEST dpdk_mem_utility 00:05:06.392 ************************************ 00:05:06.392 16:14:25 -- common/autotest_common.sh@10 -- # set +x 00:05:06.392 16:14:25 -- spdk/autotest.sh@174 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:06.392 16:14:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:06.392 16:14:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.392 16:14:25 -- common/autotest_common.sh@10 -- # set +x 00:05:06.392 ************************************ 00:05:06.392 START TEST event 00:05:06.392 ************************************ 00:05:06.393 16:14:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:06.393 * Looking for test storage... 
00:05:06.393 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:06.393 16:14:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:06.393 16:14:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:06.393 16:14:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:06.393 16:14:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:06.393 16:14:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:06.393 16:14:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:06.393 16:14:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:06.393 16:14:25 -- scripts/common.sh@335 -- # IFS=.-: 00:05:06.393 16:14:25 -- scripts/common.sh@335 -- # read -ra ver1 00:05:06.393 16:14:25 -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.393 16:14:25 -- scripts/common.sh@336 -- # read -ra ver2 00:05:06.393 16:14:25 -- scripts/common.sh@337 -- # local 'op=<' 00:05:06.393 16:14:25 -- scripts/common.sh@339 -- # ver1_l=2 00:05:06.393 16:14:25 -- scripts/common.sh@340 -- # ver2_l=1 00:05:06.393 16:14:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:06.393 16:14:25 -- scripts/common.sh@343 -- # case "$op" in 00:05:06.393 16:14:25 -- scripts/common.sh@344 -- # : 1 00:05:06.393 16:14:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:06.393 16:14:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:06.393 16:14:25 -- scripts/common.sh@364 -- # decimal 1 00:05:06.393 16:14:25 -- scripts/common.sh@352 -- # local d=1 00:05:06.393 16:14:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.393 16:14:25 -- scripts/common.sh@354 -- # echo 1 00:05:06.393 16:14:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:06.393 16:14:25 -- scripts/common.sh@365 -- # decimal 2 00:05:06.393 16:14:25 -- scripts/common.sh@352 -- # local d=2 00:05:06.393 16:14:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.393 16:14:25 -- scripts/common.sh@354 -- # echo 2 00:05:06.393 16:14:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:06.393 16:14:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:06.393 16:14:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:06.393 16:14:25 -- scripts/common.sh@367 -- # return 0 00:05:06.393 16:14:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.393 16:14:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:06.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.393 --rc genhtml_branch_coverage=1 00:05:06.393 --rc genhtml_function_coverage=1 00:05:06.393 --rc genhtml_legend=1 00:05:06.393 --rc geninfo_all_blocks=1 00:05:06.393 --rc geninfo_unexecuted_blocks=1 00:05:06.393 00:05:06.393 ' 00:05:06.393 16:14:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:06.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.393 --rc genhtml_branch_coverage=1 00:05:06.393 --rc genhtml_function_coverage=1 00:05:06.393 --rc genhtml_legend=1 00:05:06.393 --rc geninfo_all_blocks=1 00:05:06.393 --rc geninfo_unexecuted_blocks=1 00:05:06.393 00:05:06.393 ' 00:05:06.393 16:14:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:06.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.393 --rc genhtml_branch_coverage=1 00:05:06.393 --rc genhtml_function_coverage=1 00:05:06.393 --rc genhtml_legend=1 00:05:06.393 --rc geninfo_all_blocks=1 00:05:06.393 --rc geninfo_unexecuted_blocks=1 00:05:06.393 00:05:06.393 ' 00:05:06.393 16:14:25 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:06.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.393 --rc genhtml_branch_coverage=1 00:05:06.393 --rc genhtml_function_coverage=1 00:05:06.393 --rc genhtml_legend=1 00:05:06.393 --rc geninfo_all_blocks=1 00:05:06.393 --rc geninfo_unexecuted_blocks=1 00:05:06.393 00:05:06.393 ' 00:05:06.393 16:14:25 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:06.393 16:14:25 -- bdev/nbd_common.sh@6 -- # set -e 00:05:06.393 16:14:25 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:06.393 16:14:25 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:06.393 16:14:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:06.393 16:14:25 -- common/autotest_common.sh@10 -- # set +x 00:05:06.393 ************************************ 00:05:06.393 START TEST event_perf 00:05:06.393 ************************************ 00:05:06.393 16:14:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:06.393 Running I/O for 1 seconds...[2024-11-09 16:14:25.944093] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:06.393 [2024-11-09 16:14:25.944302] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56836 ] 00:05:06.393 [2024-11-09 16:14:26.094205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.652 [2024-11-09 16:14:26.281360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.652 Running I/O for 1 seconds...[2024-11-09 16:14:26.281707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.652 [2024-11-09 16:14:26.282080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.652 [2024-11-09 16:14:26.282088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:08.031 00:05:08.031 lcore 0: 183653 00:05:08.031 lcore 1: 183652 00:05:08.031 lcore 2: 183655 00:05:08.031 lcore 3: 183655 00:05:08.031 done. 00:05:08.031 00:05:08.031 real 0m1.641s 00:05:08.031 user 0m4.431s 00:05:08.031 sys 0m0.086s 00:05:08.031 16:14:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:08.031 16:14:27 -- common/autotest_common.sh@10 -- # set +x 00:05:08.031 ************************************ 00:05:08.031 END TEST event_perf 00:05:08.031 ************************************ 00:05:08.031 16:14:27 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:08.031 16:14:27 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:08.031 16:14:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:08.031 16:14:27 -- common/autotest_common.sh@10 -- # set +x 00:05:08.031 ************************************ 00:05:08.031 START TEST event_reactor 00:05:08.031 ************************************ 00:05:08.031 16:14:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:08.031 [2024-11-09 16:14:27.627940] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
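event.sh drives three small test binaries in sequence; their invocations, with the flags exactly as they appear in the run_test lines of this log, are:

```bash
# Invocations as shown in this log's run_test lines (comments describe the
# output each one produces above/below):
./test/event/event_perf/event_perf -m 0xF -t 1   # per-lcore event counts on 4 reactors, 1 s
./test/event/reactor/reactor -t 1                # single-reactor test_start/tick/test_end trace
./test/event/reactor_perf/reactor_perf -t 1      # events-per-second figure on one reactor
```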
00:05:08.031 [2024-11-09 16:14:27.628156] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56880 ] 00:05:08.031 [2024-11-09 16:14:27.772948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.289 [2024-11-09 16:14:27.943506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.664 test_start 00:05:09.664 oneshot 00:05:09.664 tick 100 00:05:09.664 tick 100 00:05:09.664 tick 250 00:05:09.664 tick 100 00:05:09.664 tick 100 00:05:09.664 tick 100 00:05:09.664 tick 250 00:05:09.664 tick 500 00:05:09.664 tick 100 00:05:09.664 tick 100 00:05:09.664 tick 250 00:05:09.664 tick 100 00:05:09.664 tick 100 00:05:09.664 test_end 00:05:09.664 00:05:09.664 real 0m1.602s 00:05:09.664 user 0m1.421s 00:05:09.664 sys 0m0.072s 00:05:09.664 16:14:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:09.664 16:14:29 -- common/autotest_common.sh@10 -- # set +x 00:05:09.664 ************************************ 00:05:09.664 END TEST event_reactor 00:05:09.664 ************************************ 00:05:09.664 16:14:29 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:09.664 16:14:29 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:09.664 16:14:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.664 16:14:29 -- common/autotest_common.sh@10 -- # set +x 00:05:09.664 ************************************ 00:05:09.664 START TEST event_reactor_perf 00:05:09.664 ************************************ 00:05:09.664 16:14:29 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:09.664 [2024-11-09 16:14:29.276729] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
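Each of these tests is bracketed by the starred START TEST / END TEST banners and the real/user/sys timings printed by the run_test wrapper from autotest_common.sh. A minimal sketch of that wrapper, modeled only on the banner and timing output visible in this log (the real helper also manages xtrace state), is:

```bash
# Hedged sketch of run_test; modeled on the banners in this log, not copied
# from test/common/autotest_common.sh.
run_test() {
    local test_name=$1; shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                  # produces the 'real/user/sys' lines in the log
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}

run_test event_reactor_perf ./test/event/reactor_perf/reactor_perf -t 1
```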
00:05:09.664 [2024-11-09 16:14:29.276928] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56912 ] 00:05:09.665 [2024-11-09 16:14:29.426512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.923 [2024-11-09 16:14:29.598568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.300 test_start 00:05:11.300 test_end 00:05:11.300 Performance: 335176 events per second 00:05:11.300 ************************************ 00:05:11.300 END TEST event_reactor_perf 00:05:11.300 ************************************ 00:05:11.300 00:05:11.300 real 0m1.558s 00:05:11.300 user 0m1.364s 00:05:11.300 sys 0m0.084s 00:05:11.300 16:14:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:11.300 16:14:30 -- common/autotest_common.sh@10 -- # set +x 00:05:11.300 16:14:30 -- event/event.sh@49 -- # uname -s 00:05:11.300 16:14:30 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:11.300 16:14:30 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:11.300 16:14:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.300 16:14:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.300 16:14:30 -- common/autotest_common.sh@10 -- # set +x 00:05:11.300 ************************************ 00:05:11.300 START TEST event_scheduler 00:05:11.300 ************************************ 00:05:11.300 16:14:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:11.300 * Looking for test storage... 00:05:11.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:11.300 16:14:30 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:11.300 16:14:30 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:11.300 16:14:30 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:11.300 16:14:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:11.300 16:14:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:11.300 16:14:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:11.300 16:14:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:11.300 16:14:30 -- scripts/common.sh@335 -- # IFS=.-: 00:05:11.300 16:14:30 -- scripts/common.sh@335 -- # read -ra ver1 00:05:11.300 16:14:30 -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.300 16:14:30 -- scripts/common.sh@336 -- # read -ra ver2 00:05:11.300 16:14:30 -- scripts/common.sh@337 -- # local 'op=<' 00:05:11.300 16:14:30 -- scripts/common.sh@339 -- # ver1_l=2 00:05:11.300 16:14:30 -- scripts/common.sh@340 -- # ver2_l=1 00:05:11.300 16:14:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:11.300 16:14:30 -- scripts/common.sh@343 -- # case "$op" in 00:05:11.300 16:14:30 -- scripts/common.sh@344 -- # : 1 00:05:11.300 16:14:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:11.300 16:14:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.300 16:14:30 -- scripts/common.sh@364 -- # decimal 1 00:05:11.300 16:14:30 -- scripts/common.sh@352 -- # local d=1 00:05:11.300 16:14:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.300 16:14:30 -- scripts/common.sh@354 -- # echo 1 00:05:11.300 16:14:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:11.300 16:14:30 -- scripts/common.sh@365 -- # decimal 2 00:05:11.300 16:14:30 -- scripts/common.sh@352 -- # local d=2 00:05:11.300 16:14:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.300 16:14:30 -- scripts/common.sh@354 -- # echo 2 00:05:11.300 16:14:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:11.300 16:14:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:11.300 16:14:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:11.300 16:14:30 -- scripts/common.sh@367 -- # return 0 00:05:11.300 16:14:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.300 16:14:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:11.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.300 --rc genhtml_branch_coverage=1 00:05:11.300 --rc genhtml_function_coverage=1 00:05:11.300 --rc genhtml_legend=1 00:05:11.300 --rc geninfo_all_blocks=1 00:05:11.300 --rc geninfo_unexecuted_blocks=1 00:05:11.300 00:05:11.300 ' 00:05:11.300 16:14:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:11.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.300 --rc genhtml_branch_coverage=1 00:05:11.300 --rc genhtml_function_coverage=1 00:05:11.300 --rc genhtml_legend=1 00:05:11.300 --rc geninfo_all_blocks=1 00:05:11.300 --rc geninfo_unexecuted_blocks=1 00:05:11.300 00:05:11.300 ' 00:05:11.300 16:14:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:11.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.300 --rc genhtml_branch_coverage=1 00:05:11.300 --rc genhtml_function_coverage=1 00:05:11.300 --rc genhtml_legend=1 00:05:11.300 --rc geninfo_all_blocks=1 00:05:11.300 --rc geninfo_unexecuted_blocks=1 00:05:11.300 00:05:11.300 ' 00:05:11.300 16:14:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:11.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.300 --rc genhtml_branch_coverage=1 00:05:11.300 --rc genhtml_function_coverage=1 00:05:11.300 --rc genhtml_legend=1 00:05:11.300 --rc geninfo_all_blocks=1 00:05:11.300 --rc geninfo_unexecuted_blocks=1 00:05:11.300 00:05:11.300 ' 00:05:11.300 16:14:30 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:11.300 16:14:30 -- scheduler/scheduler.sh@35 -- # scheduler_pid=56987 00:05:11.300 16:14:30 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.300 16:14:30 -- scheduler/scheduler.sh@37 -- # waitforlisten 56987 00:05:11.300 16:14:30 -- common/autotest_common.sh@829 -- # '[' -z 56987 ']' 00:05:11.300 16:14:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.300 16:14:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:11.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:11.300 16:14:30 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:11.300 16:14:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
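scheduler.sh starts its app with --wait-for-rpc, so the reactors come up but framework initialization is deferred until RPCs arrive; the trace that follows first sets the dynamic scheduler and only then calls framework_start_init. Condensed into a sketch (waitforlisten replaced by a sleep; flags copied from the command line in the trace, where -p 0x2 corresponds to the --main-lcore=2 EAL parameter below):

```bash
SPDK_REPO=/home/vagrant/spdk_repo/spdk

"$SPDK_REPO/test/event/scheduler/scheduler" -m 0xF -p 0x2 --wait-for-rpc -f &
scheduler_pid=$!
sleep 1   # stand-in for waitforlisten on /var/tmp/spdk.sock

# Select the dynamic scheduler before init, then let the framework start
"$SPDK_REPO/scripts/rpc.py" framework_set_scheduler dynamic
"$SPDK_REPO/scripts/rpc.py" framework_start_init

kill "$scheduler_pid"
```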
00:05:11.300 16:14:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:11.300 16:14:30 -- common/autotest_common.sh@10 -- # set +x 00:05:11.300 [2024-11-09 16:14:31.052786] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:11.300 [2024-11-09 16:14:31.052897] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56987 ] 00:05:11.560 [2024-11-09 16:14:31.202949] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:11.818 [2024-11-09 16:14:31.387848] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.818 [2024-11-09 16:14:31.388482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.818 [2024-11-09 16:14:31.388839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:11.818 [2024-11-09 16:14:31.388906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.387 16:14:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:12.387 16:14:31 -- common/autotest_common.sh@862 -- # return 0 00:05:12.387 16:14:31 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:12.388 16:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.388 16:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:12.388 POWER: Env isn't set yet! 00:05:12.388 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:12.388 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:12.388 POWER: Cannot set governor of lcore 0 to userspace 00:05:12.388 POWER: Attempting to initialise PSTAT power management... 00:05:12.388 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:12.388 POWER: Cannot set governor of lcore 0 to performance 00:05:12.388 POWER: Attempting to initialise AMD PSTATE power management... 00:05:12.388 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:12.388 POWER: Cannot set governor of lcore 0 to userspace 00:05:12.388 POWER: Attempting to initialise CPPC power management... 00:05:12.388 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:12.388 POWER: Cannot set governor of lcore 0 to userspace 00:05:12.388 POWER: Attempting to initialise VM power management... 
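The POWER: lines above show DPDK probing its cpufreq backends (ACPI cpufreq, PSTAT, AMD PSTATE, CPPC) and failing on each, and the VM power-management attempt that follows fails too, so the dynamic governor cannot scale frequencies and falls back to the built-in limits reported next. What the host actually exposes can be checked directly against the same sysfs path the errors cite (a hypothetical standalone check, not part of the test):

```bash
# Inspect cpu0's cpufreq interface; on this VM these files are not exposed,
# which is why every governor attempt above fails.
for f in scaling_driver scaling_governor scaling_available_governors; do
    printf '%s: ' "$f"
    cat "/sys/devices/system/cpu/cpu0/cpufreq/$f" 2>/dev/null || echo "not exposed"
done
```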
00:05:12.388 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:12.388 POWER: Unable to set Power Management Environment for lcore 0 00:05:12.388 [2024-11-09 16:14:31.870629] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:05:12.388 [2024-11-09 16:14:31.870645] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:05:12.388 [2024-11-09 16:14:31.870657] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:05:12.388 [2024-11-09 16:14:31.870671] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:12.388 [2024-11-09 16:14:31.870681] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:12.388 [2024-11-09 16:14:31.870689] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:12.388 16:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.388 16:14:31 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:12.388 16:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.388 16:14:31 -- common/autotest_common.sh@10 -- # set +x 00:05:12.388 [2024-11-09 16:14:32.091546] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:12.388 16:14:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.388 16:14:32 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:12.388 16:14:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:12.388 16:14:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:12.388 16:14:32 -- common/autotest_common.sh@10 -- # set +x 00:05:12.388 ************************************ 00:05:12.388 START TEST scheduler_create_thread 00:05:12.388 ************************************ 00:05:12.388 16:14:32 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:05:12.388 16:14:32 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:12.388 16:14:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.388 16:14:32 -- common/autotest_common.sh@10 -- # set +x 00:05:12.388 2 00:05:12.388 16:14:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.388 16:14:32 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:12.388 16:14:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.388 16:14:32 -- common/autotest_common.sh@10 -- # set +x 00:05:12.388 3 00:05:12.388 16:14:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.388 16:14:32 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:12.388 16:14:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.388 16:14:32 -- common/autotest_common.sh@10 -- # set +x 00:05:12.388 4 00:05:12.388 16:14:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.388 16:14:32 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:12.388 16:14:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.388 16:14:32 -- common/autotest_common.sh@10 -- # set +x 00:05:12.388 5 00:05:12.388 16:14:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.388 16:14:32 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:12.388 16:14:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.388 16:14:32 -- common/autotest_common.sh@10 -- # set +x 00:05:12.388 6 00:05:12.388 16:14:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.388 16:14:32 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:12.388 16:14:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.388 16:14:32 -- common/autotest_common.sh@10 -- # set +x 00:05:12.649 7 00:05:12.649 16:14:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.649 16:14:32 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:12.649 16:14:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.649 16:14:32 -- common/autotest_common.sh@10 -- # set +x 00:05:12.649 8 00:05:12.649 16:14:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.649 16:14:32 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:12.649 16:14:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.649 16:14:32 -- common/autotest_common.sh@10 -- # set +x 00:05:12.649 9 00:05:12.649 16:14:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.649 16:14:32 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:12.649 16:14:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.649 16:14:32 -- common/autotest_common.sh@10 -- # set +x 00:05:12.649 10 00:05:12.649 16:14:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.649 16:14:32 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:12.649 16:14:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.649 16:14:32 -- common/autotest_common.sh@10 -- # set +x 00:05:12.649 16:14:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.649 16:14:32 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:12.649 16:14:32 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:12.649 16:14:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.649 16:14:32 -- common/autotest_common.sh@10 -- # set +x 00:05:12.649 16:14:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:12.649 16:14:32 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:12.649 16:14:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:12.649 16:14:32 -- common/autotest_common.sh@10 -- # set +x 00:05:14.024 16:14:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:14.024 16:14:33 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:14.024 16:14:33 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:14.024 16:14:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:14.024 16:14:33 -- common/autotest_common.sh@10 -- # set +x 00:05:15.072 ************************************ 00:05:15.072 END TEST scheduler_create_thread 00:05:15.072 ************************************ 00:05:15.072 16:14:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:15.072 00:05:15.072 real 0m2.617s 00:05:15.072 user 0m0.016s 00:05:15.072 sys 0m0.005s 00:05:15.072 16:14:34 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:05:15.072 16:14:34 -- common/autotest_common.sh@10 -- # set +x 00:05:15.072 16:14:34 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:15.072 16:14:34 -- scheduler/scheduler.sh@46 -- # killprocess 56987 00:05:15.072 16:14:34 -- common/autotest_common.sh@936 -- # '[' -z 56987 ']' 00:05:15.072 16:14:34 -- common/autotest_common.sh@940 -- # kill -0 56987 00:05:15.072 16:14:34 -- common/autotest_common.sh@941 -- # uname 00:05:15.072 16:14:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:15.072 16:14:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 56987 00:05:15.072 killing process with pid 56987 00:05:15.072 16:14:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:15.072 16:14:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:15.072 16:14:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 56987' 00:05:15.072 16:14:34 -- common/autotest_common.sh@955 -- # kill 56987 00:05:15.072 16:14:34 -- common/autotest_common.sh@960 -- # wait 56987 00:05:15.639 [2024-11-09 16:14:35.201795] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:16.206 ************************************ 00:05:16.206 END TEST event_scheduler 00:05:16.206 ************************************ 00:05:16.206 00:05:16.206 real 0m4.991s 00:05:16.206 user 0m8.406s 00:05:16.206 sys 0m0.351s 00:05:16.206 16:14:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:16.206 16:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:16.206 16:14:35 -- event/event.sh@51 -- # modprobe -n nbd 00:05:16.206 16:14:35 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:16.206 16:14:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:16.206 16:14:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:16.206 16:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:16.206 ************************************ 00:05:16.206 START TEST app_repeat 00:05:16.206 ************************************ 00:05:16.206 16:14:35 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:05:16.206 16:14:35 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:16.206 16:14:35 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:16.206 16:14:35 -- event/event.sh@13 -- # local nbd_list 00:05:16.206 16:14:35 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:16.206 16:14:35 -- event/event.sh@14 -- # local bdev_list 00:05:16.206 16:14:35 -- event/event.sh@15 -- # local repeat_times=4 00:05:16.206 16:14:35 -- event/event.sh@17 -- # modprobe nbd 00:05:16.206 Process app_repeat pid: 57093 00:05:16.206 spdk_app_start Round 0 00:05:16.206 16:14:35 -- event/event.sh@19 -- # repeat_pid=57093 00:05:16.206 16:14:35 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.206 16:14:35 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 57093' 00:05:16.206 16:14:35 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:16.207 16:14:35 -- event/event.sh@23 -- # for i in {0..2} 00:05:16.207 16:14:35 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:16.207 16:14:35 -- event/event.sh@25 -- # waitforlisten 57093 /var/tmp/spdk-nbd.sock 00:05:16.207 16:14:35 -- common/autotest_common.sh@829 -- # '[' -z 57093 ']' 00:05:16.207 16:14:35 -- common/autotest_common.sh@833 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:16.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:16.207 16:14:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.207 16:14:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:16.207 16:14:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.207 16:14:35 -- common/autotest_common.sh@10 -- # set +x 00:05:16.207 [2024-11-09 16:14:35.939551] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:16.207 [2024-11-09 16:14:35.939628] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57093 ] 00:05:16.467 [2024-11-09 16:14:36.084045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:16.726 [2024-11-09 16:14:36.259146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.726 [2024-11-09 16:14:36.259210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.297 16:14:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.297 16:14:36 -- common/autotest_common.sh@862 -- # return 0 00:05:17.297 16:14:36 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.297 Malloc0 00:05:17.297 16:14:37 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:17.558 Malloc1 00:05:17.558 16:14:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.558 16:14:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.558 16:14:37 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.558 16:14:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:17.558 16:14:37 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.558 16:14:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:17.558 16:14:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:17.558 16:14:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:17.558 16:14:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:17.558 16:14:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:17.558 16:14:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:17.558 16:14:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:17.558 16:14:37 -- bdev/nbd_common.sh@12 -- # local i 00:05:17.558 16:14:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:17.558 16:14:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.558 16:14:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:17.820 /dev/nbd0 00:05:17.820 16:14:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:17.820 16:14:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:17.820 16:14:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:17.820 16:14:37 -- common/autotest_common.sh@867 -- # local i 00:05:17.820 16:14:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:17.820 16:14:37 -- 
common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:17.820 16:14:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:17.820 16:14:37 -- common/autotest_common.sh@871 -- # break 00:05:17.820 16:14:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:17.820 16:14:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:17.820 16:14:37 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:17.820 1+0 records in 00:05:17.820 1+0 records out 00:05:17.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397081 s, 10.3 MB/s 00:05:17.820 16:14:37 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.820 16:14:37 -- common/autotest_common.sh@884 -- # size=4096 00:05:17.820 16:14:37 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:17.820 16:14:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:17.820 16:14:37 -- common/autotest_common.sh@887 -- # return 0 00:05:17.820 16:14:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:17.820 16:14:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:17.820 16:14:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:18.079 /dev/nbd1 00:05:18.079 16:14:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:18.079 16:14:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:18.079 16:14:37 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:18.079 16:14:37 -- common/autotest_common.sh@867 -- # local i 00:05:18.079 16:14:37 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:18.079 16:14:37 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:18.079 16:14:37 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:18.079 16:14:37 -- common/autotest_common.sh@871 -- # break 00:05:18.079 16:14:37 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:18.079 16:14:37 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:18.079 16:14:37 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:18.079 1+0 records in 00:05:18.079 1+0 records out 00:05:18.079 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000519044 s, 7.9 MB/s 00:05:18.079 16:14:37 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.079 16:14:37 -- common/autotest_common.sh@884 -- # size=4096 00:05:18.079 16:14:37 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:18.079 16:14:37 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:18.079 16:14:37 -- common/autotest_common.sh@887 -- # return 0 00:05:18.079 16:14:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:18.079 16:14:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:18.079 16:14:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.079 16:14:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.079 16:14:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:18.338 { 00:05:18.338 "nbd_device": "/dev/nbd0", 00:05:18.338 "bdev_name": "Malloc0" 00:05:18.338 }, 00:05:18.338 { 00:05:18.338 "nbd_device": "/dev/nbd1", 
00:05:18.338 "bdev_name": "Malloc1" 00:05:18.338 } 00:05:18.338 ]' 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:18.338 { 00:05:18.338 "nbd_device": "/dev/nbd0", 00:05:18.338 "bdev_name": "Malloc0" 00:05:18.338 }, 00:05:18.338 { 00:05:18.338 "nbd_device": "/dev/nbd1", 00:05:18.338 "bdev_name": "Malloc1" 00:05:18.338 } 00:05:18.338 ]' 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:18.338 /dev/nbd1' 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:18.338 /dev/nbd1' 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@65 -- # count=2 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@95 -- # count=2 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:18.338 256+0 records in 00:05:18.338 256+0 records out 00:05:18.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00766553 s, 137 MB/s 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.338 16:14:37 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:18.338 256+0 records in 00:05:18.338 256+0 records out 00:05:18.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227575 s, 46.1 MB/s 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:18.338 256+0 records in 00:05:18.338 256+0 records out 00:05:18.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238319 s, 44.0 MB/s 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@51 -- # local i 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.338 16:14:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:18.595 16:14:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:18.595 16:14:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:18.595 16:14:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:18.595 16:14:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.595 16:14:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.595 16:14:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:18.595 16:14:38 -- bdev/nbd_common.sh@41 -- # break 00:05:18.595 16:14:38 -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.595 16:14:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:18.595 16:14:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:18.853 16:14:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:18.853 16:14:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:18.853 16:14:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:18.853 16:14:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:18.853 16:14:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:18.853 16:14:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:18.853 16:14:38 -- bdev/nbd_common.sh@41 -- # break 00:05:18.853 16:14:38 -- bdev/nbd_common.sh@45 -- # return 0 00:05:18.853 16:14:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:18.853 16:14:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:18.853 16:14:38 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:19.112 16:14:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:19.112 16:14:38 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:19.112 16:14:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:19.112 16:14:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:19.112 16:14:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:19.112 16:14:38 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:19.112 16:14:38 -- bdev/nbd_common.sh@65 -- # true 00:05:19.112 16:14:38 -- bdev/nbd_common.sh@65 -- # count=0 00:05:19.112 16:14:38 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:19.112 16:14:38 -- bdev/nbd_common.sh@104 -- # count=0 00:05:19.112 16:14:38 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:19.112 16:14:38 -- bdev/nbd_common.sh@109 -- # return 0 00:05:19.112 16:14:38 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:19.371 16:14:38 -- event/event.sh@35 -- # sleep 3 00:05:19.938 [2024-11-09 16:14:39.671670] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:20.196 [2024-11-09 16:14:39.801861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.196 [2024-11-09 
16:14:39.801865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.196 [2024-11-09 16:14:39.905520] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:20.196 [2024-11-09 16:14:39.905561] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:22.731 16:14:41 -- event/event.sh@23 -- # for i in {0..2} 00:05:22.731 spdk_app_start Round 1 00:05:22.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:22.731 16:14:41 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:22.731 16:14:41 -- event/event.sh@25 -- # waitforlisten 57093 /var/tmp/spdk-nbd.sock 00:05:22.731 16:14:41 -- common/autotest_common.sh@829 -- # '[' -z 57093 ']' 00:05:22.731 16:14:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:22.731 16:14:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.731 16:14:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:22.731 16:14:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.731 16:14:41 -- common/autotest_common.sh@10 -- # set +x 00:05:22.731 16:14:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:22.731 16:14:42 -- common/autotest_common.sh@862 -- # return 0 00:05:22.731 16:14:42 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.731 Malloc0 00:05:22.731 16:14:42 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:22.989 Malloc1 00:05:22.989 16:14:42 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.989 16:14:42 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.989 16:14:42 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.989 16:14:42 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:22.989 16:14:42 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.989 16:14:42 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:22.989 16:14:42 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:22.989 16:14:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:22.989 16:14:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:22.989 16:14:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:22.989 16:14:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:22.989 16:14:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:22.989 16:14:42 -- bdev/nbd_common.sh@12 -- # local i 00:05:22.989 16:14:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:22.989 16:14:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:22.989 16:14:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:23.249 /dev/nbd0 00:05:23.249 16:14:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:23.249 16:14:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:23.249 16:14:42 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:23.249 16:14:42 -- common/autotest_common.sh@867 -- # local i 00:05:23.249 16:14:42 -- common/autotest_common.sh@869 -- # (( i = 
1 )) 00:05:23.249 16:14:42 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:23.249 16:14:42 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:23.249 16:14:42 -- common/autotest_common.sh@871 -- # break 00:05:23.249 16:14:42 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:23.249 16:14:42 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:23.249 16:14:42 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.249 1+0 records in 00:05:23.249 1+0 records out 00:05:23.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499071 s, 8.2 MB/s 00:05:23.249 16:14:42 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.249 16:14:42 -- common/autotest_common.sh@884 -- # size=4096 00:05:23.249 16:14:42 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.249 16:14:42 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:23.249 16:14:42 -- common/autotest_common.sh@887 -- # return 0 00:05:23.249 16:14:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.249 16:14:42 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.249 16:14:42 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:23.249 /dev/nbd1 00:05:23.249 16:14:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:23.249 16:14:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:23.249 16:14:43 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:23.249 16:14:43 -- common/autotest_common.sh@867 -- # local i 00:05:23.249 16:14:43 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:23.249 16:14:43 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:23.249 16:14:43 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:23.249 16:14:43 -- common/autotest_common.sh@871 -- # break 00:05:23.249 16:14:43 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:23.249 16:14:43 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:23.249 16:14:43 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:23.249 1+0 records in 00:05:23.249 1+0 records out 00:05:23.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245848 s, 16.7 MB/s 00:05:23.249 16:14:43 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.250 16:14:43 -- common/autotest_common.sh@884 -- # size=4096 00:05:23.250 16:14:43 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:23.250 16:14:43 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:23.250 16:14:43 -- common/autotest_common.sh@887 -- # return 0 00:05:23.250 16:14:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:23.250 16:14:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:23.250 16:14:43 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:23.250 16:14:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:23.529 { 00:05:23.529 "nbd_device": "/dev/nbd0", 00:05:23.529 "bdev_name": "Malloc0" 00:05:23.529 }, 00:05:23.529 { 00:05:23.529 
"nbd_device": "/dev/nbd1", 00:05:23.529 "bdev_name": "Malloc1" 00:05:23.529 } 00:05:23.529 ]' 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:23.529 { 00:05:23.529 "nbd_device": "/dev/nbd0", 00:05:23.529 "bdev_name": "Malloc0" 00:05:23.529 }, 00:05:23.529 { 00:05:23.529 "nbd_device": "/dev/nbd1", 00:05:23.529 "bdev_name": "Malloc1" 00:05:23.529 } 00:05:23.529 ]' 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:23.529 /dev/nbd1' 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:23.529 /dev/nbd1' 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@65 -- # count=2 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@95 -- # count=2 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:23.529 256+0 records in 00:05:23.529 256+0 records out 00:05:23.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00723475 s, 145 MB/s 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:23.529 256+0 records in 00:05:23.529 256+0 records out 00:05:23.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140669 s, 74.5 MB/s 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:23.529 256+0 records in 00:05:23.529 256+0 records out 00:05:23.529 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0169655 s, 61.8 MB/s 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:23.529 16:14:43 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@51 -- # local i 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.529 16:14:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:23.788 16:14:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:23.788 16:14:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:23.788 16:14:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:23.788 16:14:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:23.788 16:14:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:23.788 16:14:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:23.788 16:14:43 -- bdev/nbd_common.sh@41 -- # break 00:05:23.788 16:14:43 -- bdev/nbd_common.sh@45 -- # return 0 00:05:23.788 16:14:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:23.788 16:14:43 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:24.046 16:14:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:24.046 16:14:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:24.046 16:14:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:24.046 16:14:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:24.046 16:14:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:24.046 16:14:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:24.046 16:14:43 -- bdev/nbd_common.sh@41 -- # break 00:05:24.046 16:14:43 -- bdev/nbd_common.sh@45 -- # return 0 00:05:24.046 16:14:43 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:24.046 16:14:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.046 16:14:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:24.304 16:14:43 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:24.305 16:14:43 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:24.305 16:14:43 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:24.305 16:14:43 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:24.305 16:14:43 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:24.305 16:14:43 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:24.305 16:14:43 -- bdev/nbd_common.sh@65 -- # true 00:05:24.305 16:14:43 -- bdev/nbd_common.sh@65 -- # count=0 00:05:24.305 16:14:43 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:24.305 16:14:43 -- bdev/nbd_common.sh@104 -- # count=0 00:05:24.305 16:14:43 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:24.305 16:14:43 -- bdev/nbd_common.sh@109 -- # return 0 00:05:24.305 16:14:43 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:24.563 16:14:44 -- event/event.sh@35 -- # sleep 3 00:05:25.129 [2024-11-09 16:14:44.800873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.388 [2024-11-09 16:14:44.930997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
00:05:25.388 [2024-11-09 16:14:44.931007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.388 [2024-11-09 16:14:45.034767] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:25.388 [2024-11-09 16:14:45.034983] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:27.918 spdk_app_start Round 2 00:05:27.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:27.918 16:14:47 -- event/event.sh@23 -- # for i in {0..2} 00:05:27.918 16:14:47 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:27.918 16:14:47 -- event/event.sh@25 -- # waitforlisten 57093 /var/tmp/spdk-nbd.sock 00:05:27.918 16:14:47 -- common/autotest_common.sh@829 -- # '[' -z 57093 ']' 00:05:27.918 16:14:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:27.918 16:14:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:27.918 16:14:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:27.918 16:14:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:27.918 16:14:47 -- common/autotest_common.sh@10 -- # set +x 00:05:27.918 16:14:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.918 16:14:47 -- common/autotest_common.sh@862 -- # return 0 00:05:27.918 16:14:47 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.918 Malloc0 00:05:27.918 16:14:47 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.176 Malloc1 00:05:28.176 16:14:47 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.176 16:14:47 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.176 16:14:47 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.176 16:14:47 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:28.176 16:14:47 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.176 16:14:47 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:28.176 16:14:47 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.176 16:14:47 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.176 16:14:47 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.176 16:14:47 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:28.176 16:14:47 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.176 16:14:47 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:28.176 16:14:47 -- bdev/nbd_common.sh@12 -- # local i 00:05:28.176 16:14:47 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:28.176 16:14:47 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.176 16:14:47 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.435 /dev/nbd0 00:05:28.435 16:14:47 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.435 16:14:47 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.435 16:14:47 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:28.435 16:14:47 -- common/autotest_common.sh@867 -- # local i 00:05:28.435 16:14:47 -- 
common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:28.435 16:14:47 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:28.435 16:14:47 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:28.435 16:14:48 -- common/autotest_common.sh@871 -- # break 00:05:28.435 16:14:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:28.435 16:14:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:28.435 16:14:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.435 1+0 records in 00:05:28.435 1+0 records out 00:05:28.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000214622 s, 19.1 MB/s 00:05:28.435 16:14:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.435 16:14:48 -- common/autotest_common.sh@884 -- # size=4096 00:05:28.435 16:14:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.435 16:14:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:28.435 16:14:48 -- common/autotest_common.sh@887 -- # return 0 00:05:28.435 16:14:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.435 16:14:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.435 16:14:48 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.435 /dev/nbd1 00:05:28.435 16:14:48 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.435 16:14:48 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.435 16:14:48 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:28.435 16:14:48 -- common/autotest_common.sh@867 -- # local i 00:05:28.435 16:14:48 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:28.435 16:14:48 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:28.435 16:14:48 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:28.694 16:14:48 -- common/autotest_common.sh@871 -- # break 00:05:28.694 16:14:48 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:28.694 16:14:48 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:28.694 16:14:48 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.694 1+0 records in 00:05:28.694 1+0 records out 00:05:28.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184603 s, 22.2 MB/s 00:05:28.694 16:14:48 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.694 16:14:48 -- common/autotest_common.sh@884 -- # size=4096 00:05:28.694 16:14:48 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.694 16:14:48 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:28.694 16:14:48 -- common/autotest_common.sh@887 -- # return 0 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:28.694 { 00:05:28.694 "nbd_device": "/dev/nbd0", 00:05:28.694 "bdev_name": "Malloc0" 
00:05:28.694 }, 00:05:28.694 { 00:05:28.694 "nbd_device": "/dev/nbd1", 00:05:28.694 "bdev_name": "Malloc1" 00:05:28.694 } 00:05:28.694 ]' 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.694 { 00:05:28.694 "nbd_device": "/dev/nbd0", 00:05:28.694 "bdev_name": "Malloc0" 00:05:28.694 }, 00:05:28.694 { 00:05:28.694 "nbd_device": "/dev/nbd1", 00:05:28.694 "bdev_name": "Malloc1" 00:05:28.694 } 00:05:28.694 ]' 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.694 /dev/nbd1' 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.694 /dev/nbd1' 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.694 256+0 records in 00:05:28.694 256+0 records out 00:05:28.694 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00836264 s, 125 MB/s 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.694 16:14:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.952 256+0 records in 00:05:28.952 256+0 records out 00:05:28.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168404 s, 62.3 MB/s 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.952 256+0 records in 00:05:28.952 256+0 records out 00:05:28.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0165803 s, 63.2 MB/s 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@51 -- # local i 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@41 -- # break 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.952 16:14:48 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.210 16:14:48 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.210 16:14:48 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.210 16:14:48 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.210 16:14:48 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.210 16:14:48 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.210 16:14:48 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.210 16:14:48 -- bdev/nbd_common.sh@41 -- # break 00:05:29.210 16:14:48 -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.210 16:14:48 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.210 16:14:48 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.210 16:14:48 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.468 16:14:49 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.468 16:14:49 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.468 16:14:49 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.468 16:14:49 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.468 16:14:49 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.468 16:14:49 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.468 16:14:49 -- bdev/nbd_common.sh@65 -- # true 00:05:29.468 16:14:49 -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.468 16:14:49 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.468 16:14:49 -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.468 16:14:49 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.468 16:14:49 -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.468 16:14:49 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.727 16:14:49 -- event/event.sh@35 -- # sleep 3 00:05:30.293 [2024-11-09 16:14:50.017998] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.551 [2024-11-09 16:14:50.146135] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:05:30.551 [2024-11-09 16:14:50.146137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.551 [2024-11-09 16:14:50.250063] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.552 [2024-11-09 16:14:50.250276] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:33.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:33.079 16:14:52 -- event/event.sh@38 -- # waitforlisten 57093 /var/tmp/spdk-nbd.sock 00:05:33.079 16:14:52 -- common/autotest_common.sh@829 -- # '[' -z 57093 ']' 00:05:33.079 16:14:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.079 16:14:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.079 16:14:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:33.079 16:14:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.079 16:14:52 -- common/autotest_common.sh@10 -- # set +x 00:05:33.079 16:14:52 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:33.079 16:14:52 -- common/autotest_common.sh@862 -- # return 0 00:05:33.079 16:14:52 -- event/event.sh@39 -- # killprocess 57093 00:05:33.079 16:14:52 -- common/autotest_common.sh@936 -- # '[' -z 57093 ']' 00:05:33.079 16:14:52 -- common/autotest_common.sh@940 -- # kill -0 57093 00:05:33.079 16:14:52 -- common/autotest_common.sh@941 -- # uname 00:05:33.079 16:14:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:33.079 16:14:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57093 00:05:33.079 16:14:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:33.079 16:14:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:33.079 16:14:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57093' 00:05:33.079 killing process with pid 57093 00:05:33.079 16:14:52 -- common/autotest_common.sh@955 -- # kill 57093 00:05:33.079 16:14:52 -- common/autotest_common.sh@960 -- # wait 57093 00:05:33.645 spdk_app_start is called in Round 0. 00:05:33.645 Shutdown signal received, stop current app iteration 00:05:33.645 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:33.645 spdk_app_start is called in Round 1. 00:05:33.645 Shutdown signal received, stop current app iteration 00:05:33.645 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:33.645 spdk_app_start is called in Round 2. 00:05:33.645 Shutdown signal received, stop current app iteration 00:05:33.645 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 reinitialization... 00:05:33.645 spdk_app_start is called in Round 3. 
00:05:33.645 Shutdown signal received, stop current app iteration 00:05:33.645 16:14:53 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:33.645 ************************************ 00:05:33.645 END TEST app_repeat 00:05:33.645 ************************************ 00:05:33.645 16:14:53 -- event/event.sh@42 -- # return 0 00:05:33.645 00:05:33.646 real 0m17.290s 00:05:33.646 user 0m37.106s 00:05:33.646 sys 0m1.933s 00:05:33.646 16:14:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.646 16:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.646 16:14:53 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:33.646 16:14:53 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:33.646 16:14:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.646 16:14:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.646 16:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.646 ************************************ 00:05:33.646 START TEST cpu_locks 00:05:33.646 ************************************ 00:05:33.646 16:14:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:33.646 * Looking for test storage... 00:05:33.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:33.646 16:14:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:33.646 16:14:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:33.646 16:14:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:33.646 16:14:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:33.646 16:14:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:33.646 16:14:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:33.646 16:14:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:33.646 16:14:53 -- scripts/common.sh@335 -- # IFS=.-: 00:05:33.646 16:14:53 -- scripts/common.sh@335 -- # read -ra ver1 00:05:33.646 16:14:53 -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.646 16:14:53 -- scripts/common.sh@336 -- # read -ra ver2 00:05:33.646 16:14:53 -- scripts/common.sh@337 -- # local 'op=<' 00:05:33.646 16:14:53 -- scripts/common.sh@339 -- # ver1_l=2 00:05:33.646 16:14:53 -- scripts/common.sh@340 -- # ver2_l=1 00:05:33.646 16:14:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:33.646 16:14:53 -- scripts/common.sh@343 -- # case "$op" in 00:05:33.646 16:14:53 -- scripts/common.sh@344 -- # : 1 00:05:33.646 16:14:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:33.646 16:14:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.646 16:14:53 -- scripts/common.sh@364 -- # decimal 1 00:05:33.646 16:14:53 -- scripts/common.sh@352 -- # local d=1 00:05:33.646 16:14:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.646 16:14:53 -- scripts/common.sh@354 -- # echo 1 00:05:33.646 16:14:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:33.646 16:14:53 -- scripts/common.sh@365 -- # decimal 2 00:05:33.646 16:14:53 -- scripts/common.sh@352 -- # local d=2 00:05:33.646 16:14:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.646 16:14:53 -- scripts/common.sh@354 -- # echo 2 00:05:33.646 16:14:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:33.646 16:14:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:33.646 16:14:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:33.646 16:14:53 -- scripts/common.sh@367 -- # return 0 00:05:33.646 16:14:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.646 16:14:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:33.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.646 --rc genhtml_branch_coverage=1 00:05:33.646 --rc genhtml_function_coverage=1 00:05:33.646 --rc genhtml_legend=1 00:05:33.646 --rc geninfo_all_blocks=1 00:05:33.646 --rc geninfo_unexecuted_blocks=1 00:05:33.646 00:05:33.646 ' 00:05:33.646 16:14:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:33.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.646 --rc genhtml_branch_coverage=1 00:05:33.646 --rc genhtml_function_coverage=1 00:05:33.646 --rc genhtml_legend=1 00:05:33.646 --rc geninfo_all_blocks=1 00:05:33.646 --rc geninfo_unexecuted_blocks=1 00:05:33.646 00:05:33.646 ' 00:05:33.646 16:14:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:33.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.646 --rc genhtml_branch_coverage=1 00:05:33.646 --rc genhtml_function_coverage=1 00:05:33.646 --rc genhtml_legend=1 00:05:33.646 --rc geninfo_all_blocks=1 00:05:33.646 --rc geninfo_unexecuted_blocks=1 00:05:33.646 00:05:33.646 ' 00:05:33.646 16:14:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:33.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.646 --rc genhtml_branch_coverage=1 00:05:33.646 --rc genhtml_function_coverage=1 00:05:33.646 --rc genhtml_legend=1 00:05:33.646 --rc geninfo_all_blocks=1 00:05:33.646 --rc geninfo_unexecuted_blocks=1 00:05:33.646 00:05:33.646 ' 00:05:33.646 16:14:53 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:33.646 16:14:53 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:33.646 16:14:53 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:33.646 16:14:53 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:33.646 16:14:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:33.646 16:14:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:33.646 16:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.646 ************************************ 00:05:33.646 START TEST default_locks 00:05:33.646 ************************************ 00:05:33.646 16:14:53 -- common/autotest_common.sh@1114 -- # default_locks 00:05:33.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
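A minimal sketch of the lock-check and teardown pattern this default_locks test exercises, assuming util-linux lslocks and bash; the helper names mirror the xtrace above but the bodies are simplified reconstructions, not the verbatim autotest scripts:

    # Returns 0 if the target process holds an SPDK CPU-core lock file.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    # Stop a test process and wait for it to exit.
    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1   # not running
        kill "$pid"                  # SIGTERM, matching the trap installed above
        wait "$pid" 2>/dev/null || true
    }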
00:05:33.646 16:14:53 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=57512 00:05:33.646 16:14:53 -- event/cpu_locks.sh@47 -- # waitforlisten 57512 00:05:33.646 16:14:53 -- common/autotest_common.sh@829 -- # '[' -z 57512 ']' 00:05:33.646 16:14:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.646 16:14:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:33.646 16:14:53 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.646 16:14:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.646 16:14:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:33.646 16:14:53 -- common/autotest_common.sh@10 -- # set +x 00:05:33.906 [2024-11-09 16:14:53.445368] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:33.906 [2024-11-09 16:14:53.445579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57512 ] 00:05:33.906 [2024-11-09 16:14:53.589973] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.164 [2024-11-09 16:14:53.772219] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.164 [2024-11-09 16:14:53.772455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.539 16:14:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.539 16:14:54 -- common/autotest_common.sh@862 -- # return 0 00:05:35.539 16:14:54 -- event/cpu_locks.sh@49 -- # locks_exist 57512 00:05:35.539 16:14:54 -- event/cpu_locks.sh@22 -- # lslocks -p 57512 00:05:35.539 16:14:54 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:35.539 16:14:55 -- event/cpu_locks.sh@50 -- # killprocess 57512 00:05:35.539 16:14:55 -- common/autotest_common.sh@936 -- # '[' -z 57512 ']' 00:05:35.539 16:14:55 -- common/autotest_common.sh@940 -- # kill -0 57512 00:05:35.539 16:14:55 -- common/autotest_common.sh@941 -- # uname 00:05:35.539 16:14:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:35.539 16:14:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57512 00:05:35.539 killing process with pid 57512 00:05:35.539 16:14:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:35.539 16:14:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:35.539 16:14:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57512' 00:05:35.539 16:14:55 -- common/autotest_common.sh@955 -- # kill 57512 00:05:35.539 16:14:55 -- common/autotest_common.sh@960 -- # wait 57512 00:05:36.915 16:14:56 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 57512 00:05:36.915 16:14:56 -- common/autotest_common.sh@650 -- # local es=0 00:05:36.915 16:14:56 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57512 00:05:36.915 16:14:56 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:36.915 16:14:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.915 16:14:56 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:36.915 16:14:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:36.915 16:14:56 -- common/autotest_common.sh@653 -- # waitforlisten 57512 00:05:36.915 16:14:56 -- common/autotest_common.sh@829 -- # '[' 
-z 57512 ']' 00:05:36.915 16:14:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.915 16:14:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.915 ERROR: process (pid: 57512) is no longer running 00:05:36.915 16:14:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.915 16:14:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.915 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:05:36.915 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57512) - No such process 00:05:36.915 16:14:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.915 16:14:56 -- common/autotest_common.sh@862 -- # return 1 00:05:36.915 16:14:56 -- common/autotest_common.sh@653 -- # es=1 00:05:36.915 16:14:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:36.915 16:14:56 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:36.915 16:14:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:36.915 16:14:56 -- event/cpu_locks.sh@54 -- # no_locks 00:05:36.915 16:14:56 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:36.915 16:14:56 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:36.915 16:14:56 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:36.915 00:05:36.915 real 0m2.887s 00:05:36.915 user 0m3.001s 00:05:36.915 sys 0m0.418s 00:05:36.915 ************************************ 00:05:36.915 END TEST default_locks 00:05:36.915 ************************************ 00:05:36.915 16:14:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:36.915 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:05:36.915 16:14:56 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:36.915 16:14:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:36.915 16:14:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:36.915 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:05:36.915 ************************************ 00:05:36.915 START TEST default_locks_via_rpc 00:05:36.915 ************************************ 00:05:36.915 16:14:56 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:05:36.915 16:14:56 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=57578 00:05:36.915 16:14:56 -- event/cpu_locks.sh@63 -- # waitforlisten 57578 00:05:36.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.915 16:14:56 -- common/autotest_common.sh@829 -- # '[' -z 57578 ']' 00:05:36.915 16:14:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.915 16:14:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:36.915 16:14:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.915 16:14:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:36.915 16:14:56 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:36.915 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:05:36.915 [2024-11-09 16:14:56.389184] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
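The "Waiting for process to start up..." banner above comes from a retry loop along these lines (a simplified reconstruction assuming a UNIX-domain RPC socket; the real waitforlisten helper also probes the endpoint via rpc.py rather than just checking for the socket file):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            [ -S "$rpc_addr" ] && return 0           # socket file present
            sleep 0.5
        done
        return 1   # timed out
    }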
00:05:36.915 [2024-11-09 16:14:56.389464] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57578 ] 00:05:36.915 [2024-11-09 16:14:56.543490] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.176 [2024-11-09 16:14:56.717778] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:37.176 [2024-11-09 16:14:56.717990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.112 16:14:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.112 16:14:57 -- common/autotest_common.sh@862 -- # return 0 00:05:38.112 16:14:57 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:38.112 16:14:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.112 16:14:57 -- common/autotest_common.sh@10 -- # set +x 00:05:38.112 16:14:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.112 16:14:57 -- event/cpu_locks.sh@67 -- # no_locks 00:05:38.112 16:14:57 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:38.112 16:14:57 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:38.112 16:14:57 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:38.112 16:14:57 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:38.112 16:14:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:38.112 16:14:57 -- common/autotest_common.sh@10 -- # set +x 00:05:38.112 16:14:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:38.112 16:14:57 -- event/cpu_locks.sh@71 -- # locks_exist 57578 00:05:38.112 16:14:57 -- event/cpu_locks.sh@22 -- # lslocks -p 57578 00:05:38.112 16:14:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:38.370 16:14:58 -- event/cpu_locks.sh@73 -- # killprocess 57578 00:05:38.370 16:14:58 -- common/autotest_common.sh@936 -- # '[' -z 57578 ']' 00:05:38.370 16:14:58 -- common/autotest_common.sh@940 -- # kill -0 57578 00:05:38.370 16:14:58 -- common/autotest_common.sh@941 -- # uname 00:05:38.370 16:14:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:38.370 16:14:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57578 00:05:38.370 killing process with pid 57578 00:05:38.370 16:14:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:38.370 16:14:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:38.370 16:14:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57578' 00:05:38.370 16:14:58 -- common/autotest_common.sh@955 -- # kill 57578 00:05:38.370 16:14:58 -- common/autotest_common.sh@960 -- # wait 57578 00:05:39.745 ************************************ 00:05:39.745 END TEST default_locks_via_rpc 00:05:39.745 ************************************ 00:05:39.745 00:05:39.745 real 0m2.994s 00:05:39.745 user 0m3.085s 00:05:39.745 sys 0m0.444s 00:05:39.745 16:14:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:39.745 16:14:59 -- common/autotest_common.sh@10 -- # set +x 00:05:39.745 16:14:59 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:39.745 16:14:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:39.745 16:14:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:39.745 16:14:59 -- common/autotest_common.sh@10 -- # set +x 00:05:39.745 
************************************ 00:05:39.745 START TEST non_locking_app_on_locked_coremask 00:05:39.745 ************************************ 00:05:39.745 16:14:59 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:05:39.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.745 16:14:59 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=57643 00:05:39.745 16:14:59 -- event/cpu_locks.sh@81 -- # waitforlisten 57643 /var/tmp/spdk.sock 00:05:39.745 16:14:59 -- common/autotest_common.sh@829 -- # '[' -z 57643 ']' 00:05:39.745 16:14:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.745 16:14:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.745 16:14:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.745 16:14:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.745 16:14:59 -- common/autotest_common.sh@10 -- # set +x 00:05:39.745 16:14:59 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.745 [2024-11-09 16:14:59.411976] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:39.745 [2024-11-09 16:14:59.412065] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57643 ] 00:05:40.004 [2024-11-09 16:14:59.552867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.004 [2024-11-09 16:14:59.691651] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.004 [2024-11-09 16:14:59.691805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:40.572 16:15:00 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.572 16:15:00 -- common/autotest_common.sh@862 -- # return 0 00:05:40.572 16:15:00 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=57659 00:05:40.572 16:15:00 -- event/cpu_locks.sh@85 -- # waitforlisten 57659 /var/tmp/spdk2.sock 00:05:40.572 16:15:00 -- common/autotest_common.sh@829 -- # '[' -z 57659 ']' 00:05:40.572 16:15:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.572 16:15:00 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:40.572 16:15:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.572 16:15:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.572 16:15:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.572 16:15:00 -- common/autotest_common.sh@10 -- # set +x 00:05:40.572 [2024-11-09 16:15:00.286315] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:40.572 [2024-11-09 16:15:00.286606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57659 ] 00:05:40.829 [2024-11-09 16:15:00.430247] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
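The "CPU core locks deactivated." notice above is the point of non_locking_app_on_locked_coremask: a target started with --disable-cpumask-locks never tries to claim the core lock file, so it can run on core 0 while pid 57643 holds the lock. A rough sketch of the pairing, reusing the binary path and flags shown in this run:

  # first target claims the lock file for core 0
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  # second target skips lock claiming, so sharing the core is not an error
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 \
      --disable-cpumask-locks -r /var/tmp/spdk2.sock &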
00:05:40.829 [2024-11-09 16:15:00.430295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.087 [2024-11-09 16:15:00.717585] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:41.087 [2024-11-09 16:15:00.717734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.460 16:15:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.460 16:15:01 -- common/autotest_common.sh@862 -- # return 0 00:05:42.460 16:15:01 -- event/cpu_locks.sh@87 -- # locks_exist 57643 00:05:42.460 16:15:01 -- event/cpu_locks.sh@22 -- # lslocks -p 57643 00:05:42.460 16:15:01 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.460 16:15:02 -- event/cpu_locks.sh@89 -- # killprocess 57643 00:05:42.460 16:15:02 -- common/autotest_common.sh@936 -- # '[' -z 57643 ']' 00:05:42.460 16:15:02 -- common/autotest_common.sh@940 -- # kill -0 57643 00:05:42.460 16:15:02 -- common/autotest_common.sh@941 -- # uname 00:05:42.460 16:15:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.460 16:15:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57643 00:05:42.460 killing process with pid 57643 00:05:42.460 16:15:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:42.460 16:15:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:42.460 16:15:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57643' 00:05:42.460 16:15:02 -- common/autotest_common.sh@955 -- # kill 57643 00:05:42.460 16:15:02 -- common/autotest_common.sh@960 -- # wait 57643 00:05:44.990 16:15:04 -- event/cpu_locks.sh@90 -- # killprocess 57659 00:05:44.990 16:15:04 -- common/autotest_common.sh@936 -- # '[' -z 57659 ']' 00:05:44.990 16:15:04 -- common/autotest_common.sh@940 -- # kill -0 57659 00:05:44.990 16:15:04 -- common/autotest_common.sh@941 -- # uname 00:05:44.990 16:15:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:44.990 16:15:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57659 00:05:44.990 killing process with pid 57659 00:05:44.990 16:15:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:44.990 16:15:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:44.990 16:15:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57659' 00:05:44.990 16:15:04 -- common/autotest_common.sh@955 -- # kill 57659 00:05:44.990 16:15:04 -- common/autotest_common.sh@960 -- # wait 57659 00:05:45.925 00:05:45.925 real 0m6.258s 00:05:45.925 user 0m6.623s 00:05:45.925 sys 0m0.779s 00:05:45.925 16:15:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:45.925 16:15:05 -- common/autotest_common.sh@10 -- # set +x 00:05:45.925 ************************************ 00:05:45.925 END TEST non_locking_app_on_locked_coremask 00:05:45.925 ************************************ 00:05:45.925 16:15:05 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:45.925 16:15:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.925 16:15:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.925 16:15:05 -- common/autotest_common.sh@10 -- # set +x 00:05:45.925 ************************************ 00:05:45.925 START TEST locking_app_on_unlocked_coremask 00:05:45.925 ************************************ 00:05:45.925 16:15:05 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:05:45.925 16:15:05 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=57752 00:05:45.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.925 16:15:05 -- event/cpu_locks.sh@99 -- # waitforlisten 57752 /var/tmp/spdk.sock 00:05:45.925 16:15:05 -- common/autotest_common.sh@829 -- # '[' -z 57752 ']' 00:05:45.925 16:15:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.925 16:15:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:45.925 16:15:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.925 16:15:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:45.925 16:15:05 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:45.925 16:15:05 -- common/autotest_common.sh@10 -- # set +x 00:05:46.186 [2024-11-09 16:15:05.745889] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:46.186 [2024-11-09 16:15:05.746003] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57752 ] 00:05:46.186 [2024-11-09 16:15:05.894263] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:46.186 [2024-11-09 16:15:05.894321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.456 [2024-11-09 16:15:06.039100] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:46.456 [2024-11-09 16:15:06.039272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:47.022 16:15:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.022 16:15:06 -- common/autotest_common.sh@862 -- # return 0 00:05:47.022 16:15:06 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=57768 00:05:47.022 16:15:06 -- event/cpu_locks.sh@103 -- # waitforlisten 57768 /var/tmp/spdk2.sock 00:05:47.022 16:15:06 -- common/autotest_common.sh@829 -- # '[' -z 57768 ']' 00:05:47.022 16:15:06 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:47.022 16:15:06 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:47.022 16:15:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:47.022 16:15:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:47.022 16:15:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:47.022 16:15:06 -- common/autotest_common.sh@10 -- # set +x 00:05:47.022 [2024-11-09 16:15:06.625520] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:05:47.022 [2024-11-09 16:15:06.625985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57768 ] 00:05:47.022 [2024-11-09 16:15:06.770707] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.588 [2024-11-09 16:15:07.066219] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:47.588 [2024-11-09 16:15:07.066393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.522 16:15:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.522 16:15:08 -- common/autotest_common.sh@862 -- # return 0 00:05:48.522 16:15:08 -- event/cpu_locks.sh@105 -- # locks_exist 57768 00:05:48.522 16:15:08 -- event/cpu_locks.sh@22 -- # lslocks -p 57768 00:05:48.522 16:15:08 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.781 16:15:08 -- event/cpu_locks.sh@107 -- # killprocess 57752 00:05:48.781 16:15:08 -- common/autotest_common.sh@936 -- # '[' -z 57752 ']' 00:05:48.781 16:15:08 -- common/autotest_common.sh@940 -- # kill -0 57752 00:05:48.781 16:15:08 -- common/autotest_common.sh@941 -- # uname 00:05:48.781 16:15:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.781 16:15:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57752 00:05:48.781 killing process with pid 57752 00:05:48.781 16:15:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:48.781 16:15:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:48.781 16:15:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57752' 00:05:48.781 16:15:08 -- common/autotest_common.sh@955 -- # kill 57752 00:05:48.781 16:15:08 -- common/autotest_common.sh@960 -- # wait 57752 00:05:51.312 16:15:10 -- event/cpu_locks.sh@108 -- # killprocess 57768 00:05:51.312 16:15:10 -- common/autotest_common.sh@936 -- # '[' -z 57768 ']' 00:05:51.312 16:15:10 -- common/autotest_common.sh@940 -- # kill -0 57768 00:05:51.312 16:15:10 -- common/autotest_common.sh@941 -- # uname 00:05:51.312 16:15:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:51.312 16:15:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57768 00:05:51.312 killing process with pid 57768 00:05:51.312 16:15:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:51.312 16:15:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:51.312 16:15:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57768' 00:05:51.312 16:15:10 -- common/autotest_common.sh@955 -- # kill 57768 00:05:51.312 16:15:10 -- common/autotest_common.sh@960 -- # wait 57768 00:05:52.690 ************************************ 00:05:52.690 END TEST locking_app_on_unlocked_coremask 00:05:52.690 ************************************ 00:05:52.690 00:05:52.690 real 0m6.346s 00:05:52.690 user 0m6.741s 00:05:52.691 sys 0m0.809s 00:05:52.691 16:15:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:52.691 16:15:12 -- common/autotest_common.sh@10 -- # set +x 00:05:52.691 16:15:12 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:52.691 16:15:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:52.691 16:15:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:52.691 16:15:12 -- common/autotest_common.sh@10 -- # set +x 
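Each of the tests above verifies lock ownership the same way: locks_exist pipes lslocks output for the target pid into a grep for the spdk_cpu_lock file name. The same check works standalone; the pid below is hypothetical:

  pid=57768  # hypothetical pid of a running spdk_tgt
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "pid $pid holds an spdk_cpu_lock file"
  fi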
00:05:52.691 ************************************ 00:05:52.691 START TEST locking_app_on_locked_coremask 00:05:52.691 ************************************ 00:05:52.691 16:15:12 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:05:52.691 16:15:12 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=57866 00:05:52.691 16:15:12 -- event/cpu_locks.sh@116 -- # waitforlisten 57866 /var/tmp/spdk.sock 00:05:52.691 16:15:12 -- common/autotest_common.sh@829 -- # '[' -z 57866 ']' 00:05:52.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.691 16:15:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.691 16:15:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.691 16:15:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.691 16:15:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.691 16:15:12 -- common/autotest_common.sh@10 -- # set +x 00:05:52.691 16:15:12 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.691 [2024-11-09 16:15:12.139168] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:52.691 [2024-11-09 16:15:12.139298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57866 ] 00:05:52.691 [2024-11-09 16:15:12.287607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.956 [2024-11-09 16:15:12.464643] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:52.956 [2024-11-09 16:15:12.464988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.899 16:15:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.899 16:15:13 -- common/autotest_common.sh@862 -- # return 0 00:05:53.899 16:15:13 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=57890 00:05:53.900 16:15:13 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 57890 /var/tmp/spdk2.sock 00:05:53.900 16:15:13 -- common/autotest_common.sh@650 -- # local es=0 00:05:53.900 16:15:13 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57890 /var/tmp/spdk2.sock 00:05:53.900 16:15:13 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:53.900 16:15:13 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:53.900 16:15:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.900 16:15:13 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:53.900 16:15:13 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:53.900 16:15:13 -- common/autotest_common.sh@653 -- # waitforlisten 57890 /var/tmp/spdk2.sock 00:05:53.900 16:15:13 -- common/autotest_common.sh@829 -- # '[' -z 57890 ']' 00:05:53.900 16:15:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.900 16:15:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.900 16:15:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:53.900 16:15:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.900 16:15:13 -- common/autotest_common.sh@10 -- # set +x 00:05:54.160 [2024-11-09 16:15:13.701917] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:54.160 [2024-11-09 16:15:13.702332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57890 ] 00:05:54.160 [2024-11-09 16:15:13.854349] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 57866 has claimed it. 00:05:54.160 [2024-11-09 16:15:13.854416] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:54.730 ERROR: process (pid: 57890) is no longer running 00:05:54.730 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57890) - No such process 00:05:54.730 16:15:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.730 16:15:14 -- common/autotest_common.sh@862 -- # return 1 00:05:54.730 16:15:14 -- common/autotest_common.sh@653 -- # es=1 00:05:54.730 16:15:14 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:54.730 16:15:14 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:54.730 16:15:14 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:54.730 16:15:14 -- event/cpu_locks.sh@122 -- # locks_exist 57866 00:05:54.730 16:15:14 -- event/cpu_locks.sh@22 -- # lslocks -p 57866 00:05:54.730 16:15:14 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.730 16:15:14 -- event/cpu_locks.sh@124 -- # killprocess 57866 00:05:54.730 16:15:14 -- common/autotest_common.sh@936 -- # '[' -z 57866 ']' 00:05:54.730 16:15:14 -- common/autotest_common.sh@940 -- # kill -0 57866 00:05:54.730 16:15:14 -- common/autotest_common.sh@941 -- # uname 00:05:54.990 16:15:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.990 16:15:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57866 00:05:54.990 16:15:14 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.990 16:15:14 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.990 16:15:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57866' 00:05:54.990 killing process with pid 57866 00:05:54.990 16:15:14 -- common/autotest_common.sh@955 -- # kill 57866 00:05:54.990 16:15:14 -- common/autotest_common.sh@960 -- # wait 57866 00:05:56.366 00:05:56.366 real 0m3.727s 00:05:56.366 user 0m4.058s 00:05:56.366 sys 0m0.547s 00:05:56.366 16:15:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:56.366 ************************************ 00:05:56.366 END TEST locking_app_on_locked_coremask 00:05:56.366 ************************************ 00:05:56.366 16:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:56.366 16:15:15 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:56.366 16:15:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:56.366 16:15:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:56.366 16:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:56.366 ************************************ 00:05:56.366 START TEST locking_overlapped_coremask 00:05:56.366 ************************************ 00:05:56.366 16:15:15 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:05:56.366 16:15:15 
-- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:56.366 16:15:15 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=57943 00:05:56.366 16:15:15 -- event/cpu_locks.sh@133 -- # waitforlisten 57943 /var/tmp/spdk.sock 00:05:56.366 16:15:15 -- common/autotest_common.sh@829 -- # '[' -z 57943 ']' 00:05:56.366 16:15:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.366 16:15:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.366 16:15:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.366 16:15:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.366 16:15:15 -- common/autotest_common.sh@10 -- # set +x 00:05:56.366 [2024-11-09 16:15:15.935940] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:56.366 [2024-11-09 16:15:15.936188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57943 ] 00:05:56.366 [2024-11-09 16:15:16.081211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.624 [2024-11-09 16:15:16.222745] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.624 [2024-11-09 16:15:16.222989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.624 [2024-11-09 16:15:16.223076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.624 [2024-11-09 16:15:16.223089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.191 16:15:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.191 16:15:16 -- common/autotest_common.sh@862 -- # return 0 00:05:57.191 16:15:16 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=57961 00:05:57.191 16:15:16 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 57961 /var/tmp/spdk2.sock 00:05:57.191 16:15:16 -- common/autotest_common.sh@650 -- # local es=0 00:05:57.191 16:15:16 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 57961 /var/tmp/spdk2.sock 00:05:57.191 16:15:16 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:57.191 16:15:16 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:57.191 16:15:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.191 16:15:16 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:57.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.191 16:15:16 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:57.191 16:15:16 -- common/autotest_common.sh@653 -- # waitforlisten 57961 /var/tmp/spdk2.sock 00:05:57.191 16:15:16 -- common/autotest_common.sh@829 -- # '[' -z 57961 ']' 00:05:57.191 16:15:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.191 16:15:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.191 16:15:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
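The two masks above are picked to collide on exactly one core: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so their intersection is core 2, the core named in the claim error later in this test. The arithmetic checks out in shell:

  echo $(( 0x7 & 0x1c ))   # prints 4, i.e. only bit 2 is set
  # bit 2 corresponds to CPU core 2, the contested core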
00:05:57.191 16:15:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.191 16:15:16 -- common/autotest_common.sh@10 -- # set +x 00:05:57.191 [2024-11-09 16:15:16.816375] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:57.191 [2024-11-09 16:15:16.816481] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57961 ] 00:05:57.468 [2024-11-09 16:15:16.971548] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 57943 has claimed it. 00:05:57.468 [2024-11-09 16:15:16.971613] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:57.738 ERROR: process (pid: 57961) is no longer running 00:05:57.738 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 844: kill: (57961) - No such process 00:05:57.738 16:15:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.738 16:15:17 -- common/autotest_common.sh@862 -- # return 1 00:05:57.738 16:15:17 -- common/autotest_common.sh@653 -- # es=1 00:05:57.738 16:15:17 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:57.738 16:15:17 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:57.738 16:15:17 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:57.738 16:15:17 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:57.738 16:15:17 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:57.738 16:15:17 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:57.738 16:15:17 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:57.738 16:15:17 -- event/cpu_locks.sh@141 -- # killprocess 57943 00:05:57.738 16:15:17 -- common/autotest_common.sh@936 -- # '[' -z 57943 ']' 00:05:57.738 16:15:17 -- common/autotest_common.sh@940 -- # kill -0 57943 00:05:57.738 16:15:17 -- common/autotest_common.sh@941 -- # uname 00:05:57.738 16:15:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:57.738 16:15:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 57943 00:05:57.738 killing process with pid 57943 00:05:57.738 16:15:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:57.738 16:15:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:57.738 16:15:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 57943' 00:05:57.738 16:15:17 -- common/autotest_common.sh@955 -- # kill 57943 00:05:57.738 16:15:17 -- common/autotest_common.sh@960 -- # wait 57943 00:05:59.112 00:05:59.112 real 0m2.748s 00:05:59.112 user 0m7.200s 00:05:59.112 sys 0m0.384s 00:05:59.112 16:15:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:59.112 16:15:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.112 ************************************ 00:05:59.112 END TEST locking_overlapped_coremask 00:05:59.112 ************************************ 00:05:59.112 16:15:18 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:59.112 16:15:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:59.112 16:15:18 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:05:59.112 16:15:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.112 ************************************ 00:05:59.112 START TEST locking_overlapped_coremask_via_rpc 00:05:59.112 ************************************ 00:05:59.112 16:15:18 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:05:59.112 16:15:18 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58014 00:05:59.112 16:15:18 -- event/cpu_locks.sh@149 -- # waitforlisten 58014 /var/tmp/spdk.sock 00:05:59.112 16:15:18 -- common/autotest_common.sh@829 -- # '[' -z 58014 ']' 00:05:59.112 16:15:18 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:59.112 16:15:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.112 16:15:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.112 16:15:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.112 16:15:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.112 16:15:18 -- common/autotest_common.sh@10 -- # set +x 00:05:59.112 [2024-11-09 16:15:18.724647] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:59.112 [2024-11-09 16:15:18.724746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58014 ] 00:05:59.112 [2024-11-09 16:15:18.869482] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:59.112 [2024-11-09 16:15:18.869625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.370 [2024-11-09 16:15:19.015564] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:59.370 [2024-11-09 16:15:19.015961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:59.370 [2024-11-09 16:15:19.016301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.370 [2024-11-09 16:15:19.016328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:59.936 16:15:19 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.936 16:15:19 -- common/autotest_common.sh@862 -- # return 0 00:05:59.936 16:15:19 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=58032 00:05:59.936 16:15:19 -- event/cpu_locks.sh@153 -- # waitforlisten 58032 /var/tmp/spdk2.sock 00:05:59.936 16:15:19 -- common/autotest_common.sh@829 -- # '[' -z 58032 ']' 00:05:59.936 16:15:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:59.936 16:15:19 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:59.936 16:15:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.936 16:15:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:59.936 16:15:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.936 16:15:19 -- common/autotest_common.sh@10 -- # set +x 00:05:59.936 [2024-11-09 16:15:19.598515] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:05:59.936 [2024-11-09 16:15:19.598786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58032 ] 00:06:00.194 [2024-11-09 16:15:19.746210] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:00.194 [2024-11-09 16:15:19.746261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.452 [2024-11-09 16:15:20.056761] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:00.452 [2024-11-09 16:15:20.057312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.452 [2024-11-09 16:15:20.057350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:00.452 [2024-11-09 16:15:20.057150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.387 16:15:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.387 16:15:21 -- common/autotest_common.sh@862 -- # return 0 00:06:01.387 16:15:21 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:01.387 16:15:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.387 16:15:21 -- common/autotest_common.sh@10 -- # set +x 00:06:01.387 16:15:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:01.387 16:15:21 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.387 16:15:21 -- common/autotest_common.sh@650 -- # local es=0 00:06:01.387 16:15:21 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.387 16:15:21 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:01.387 16:15:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.387 16:15:21 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:01.387 16:15:21 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.387 16:15:21 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:01.387 16:15:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:01.387 16:15:21 -- common/autotest_common.sh@10 -- # set +x 00:06:01.387 [2024-11-09 16:15:21.116367] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58014 has claimed it. 
00:06:01.387 request: 00:06:01.387 { 00:06:01.387 "method": "framework_enable_cpumask_locks", 00:06:01.387 "req_id": 1 00:06:01.387 } 00:06:01.387 Got JSON-RPC error response 00:06:01.387 response: 00:06:01.387 { 00:06:01.387 "code": -32603, 00:06:01.387 "message": "Failed to claim CPU core: 2" 00:06:01.387 } 00:06:01.387 16:15:21 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:01.387 16:15:21 -- common/autotest_common.sh@653 -- # es=1 00:06:01.387 16:15:21 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.387 16:15:21 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:01.387 16:15:21 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.387 16:15:21 -- event/cpu_locks.sh@158 -- # waitforlisten 58014 /var/tmp/spdk.sock 00:06:01.387 16:15:21 -- common/autotest_common.sh@829 -- # '[' -z 58014 ']' 00:06:01.387 16:15:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.387 16:15:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.387 16:15:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.387 16:15:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.387 16:15:21 -- common/autotest_common.sh@10 -- # set +x 00:06:01.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.645 16:15:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.645 16:15:21 -- common/autotest_common.sh@862 -- # return 0 00:06:01.645 16:15:21 -- event/cpu_locks.sh@159 -- # waitforlisten 58032 /var/tmp/spdk2.sock 00:06:01.645 16:15:21 -- common/autotest_common.sh@829 -- # '[' -z 58032 ']' 00:06:01.645 16:15:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.645 16:15:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:01.645 16:15:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
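The request/response pair above is ordinary JSON-RPC: framework_enable_cpumask_locks on the second target returns -32603 because pid 58014 already holds the lock for core 2. The same call can be issued by hand with scripts/rpc.py; while the overlap persists it should keep failing:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
      framework_enable_cpumask_locks \
    || echo 'RPC failed as expected: core 2 already claimed'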
00:06:01.645 16:15:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:01.645 16:15:21 -- common/autotest_common.sh@10 -- # set +x 00:06:01.903 ************************************ 00:06:01.903 END TEST locking_overlapped_coremask_via_rpc 00:06:01.903 ************************************ 00:06:01.903 16:15:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:01.903 16:15:21 -- common/autotest_common.sh@862 -- # return 0 00:06:01.903 16:15:21 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:01.903 16:15:21 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:01.903 16:15:21 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:01.903 16:15:21 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:01.903 00:06:01.903 real 0m2.860s 00:06:01.903 user 0m1.161s 00:06:01.903 sys 0m0.128s 00:06:01.903 16:15:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.903 16:15:21 -- common/autotest_common.sh@10 -- # set +x 00:06:01.903 16:15:21 -- event/cpu_locks.sh@174 -- # cleanup 00:06:01.903 16:15:21 -- event/cpu_locks.sh@15 -- # [[ -z 58014 ]] 00:06:01.903 16:15:21 -- event/cpu_locks.sh@15 -- # killprocess 58014 00:06:01.904 16:15:21 -- common/autotest_common.sh@936 -- # '[' -z 58014 ']' 00:06:01.904 16:15:21 -- common/autotest_common.sh@940 -- # kill -0 58014 00:06:01.904 16:15:21 -- common/autotest_common.sh@941 -- # uname 00:06:01.904 16:15:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:01.904 16:15:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58014 00:06:01.904 killing process with pid 58014 00:06:01.904 16:15:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:01.904 16:15:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:01.904 16:15:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58014' 00:06:01.904 16:15:21 -- common/autotest_common.sh@955 -- # kill 58014 00:06:01.904 16:15:21 -- common/autotest_common.sh@960 -- # wait 58014 00:06:03.277 16:15:22 -- event/cpu_locks.sh@16 -- # [[ -z 58032 ]] 00:06:03.277 16:15:22 -- event/cpu_locks.sh@16 -- # killprocess 58032 00:06:03.277 16:15:22 -- common/autotest_common.sh@936 -- # '[' -z 58032 ']' 00:06:03.277 16:15:22 -- common/autotest_common.sh@940 -- # kill -0 58032 00:06:03.277 16:15:22 -- common/autotest_common.sh@941 -- # uname 00:06:03.277 16:15:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:03.277 16:15:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58032 00:06:03.277 killing process with pid 58032 00:06:03.277 16:15:22 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:03.277 16:15:22 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:03.277 16:15:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58032' 00:06:03.277 16:15:22 -- common/autotest_common.sh@955 -- # kill 58032 00:06:03.277 16:15:22 -- common/autotest_common.sh@960 -- # wait 58032 00:06:04.653 16:15:24 -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.653 16:15:24 -- event/cpu_locks.sh@1 -- # cleanup 00:06:04.653 16:15:24 -- event/cpu_locks.sh@15 -- # [[ -z 58014 ]] 00:06:04.653 16:15:24 -- event/cpu_locks.sh@15 -- # killprocess 58014 00:06:04.653 16:15:24 -- 
common/autotest_common.sh@936 -- # '[' -z 58014 ']' 00:06:04.653 Process with pid 58014 is not found 00:06:04.653 Process with pid 58032 is not found 00:06:04.653 16:15:24 -- common/autotest_common.sh@940 -- # kill -0 58014 00:06:04.653 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58014) - No such process 00:06:04.653 16:15:24 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58014 is not found' 00:06:04.653 16:15:24 -- event/cpu_locks.sh@16 -- # [[ -z 58032 ]] 00:06:04.653 16:15:24 -- event/cpu_locks.sh@16 -- # killprocess 58032 00:06:04.653 16:15:24 -- common/autotest_common.sh@936 -- # '[' -z 58032 ']' 00:06:04.653 16:15:24 -- common/autotest_common.sh@940 -- # kill -0 58032 00:06:04.653 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (58032) - No such process 00:06:04.653 16:15:24 -- common/autotest_common.sh@963 -- # echo 'Process with pid 58032 is not found' 00:06:04.653 16:15:24 -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.653 ************************************ 00:06:04.653 END TEST cpu_locks 00:06:04.653 ************************************ 00:06:04.653 00:06:04.653 real 0m30.783s 00:06:04.653 user 0m51.504s 00:06:04.653 sys 0m4.291s 00:06:04.653 16:15:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.653 16:15:24 -- common/autotest_common.sh@10 -- # set +x 00:06:04.653 ************************************ 00:06:04.653 END TEST event 00:06:04.653 ************************************ 00:06:04.653 00:06:04.653 real 0m58.295s 00:06:04.653 user 1m44.398s 00:06:04.653 sys 0m7.036s 00:06:04.653 16:15:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:04.653 16:15:24 -- common/autotest_common.sh@10 -- # set +x 00:06:04.653 16:15:24 -- spdk/autotest.sh@175 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:04.653 16:15:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.653 16:15:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.653 16:15:24 -- common/autotest_common.sh@10 -- # set +x 00:06:04.653 ************************************ 00:06:04.653 START TEST thread 00:06:04.653 ************************************ 00:06:04.653 16:15:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:04.654 * Looking for test storage... 
00:06:04.654 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:04.654 16:15:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:04.654 16:15:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:04.654 16:15:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:04.654 16:15:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:04.654 16:15:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:04.654 16:15:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:04.654 16:15:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:04.654 16:15:24 -- scripts/common.sh@335 -- # IFS=.-: 00:06:04.654 16:15:24 -- scripts/common.sh@335 -- # read -ra ver1 00:06:04.654 16:15:24 -- scripts/common.sh@336 -- # IFS=.-: 00:06:04.654 16:15:24 -- scripts/common.sh@336 -- # read -ra ver2 00:06:04.654 16:15:24 -- scripts/common.sh@337 -- # local 'op=<' 00:06:04.654 16:15:24 -- scripts/common.sh@339 -- # ver1_l=2 00:06:04.654 16:15:24 -- scripts/common.sh@340 -- # ver2_l=1 00:06:04.654 16:15:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:04.654 16:15:24 -- scripts/common.sh@343 -- # case "$op" in 00:06:04.654 16:15:24 -- scripts/common.sh@344 -- # : 1 00:06:04.654 16:15:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:04.654 16:15:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:04.654 16:15:24 -- scripts/common.sh@364 -- # decimal 1 00:06:04.654 16:15:24 -- scripts/common.sh@352 -- # local d=1 00:06:04.654 16:15:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:04.654 16:15:24 -- scripts/common.sh@354 -- # echo 1 00:06:04.654 16:15:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:04.654 16:15:24 -- scripts/common.sh@365 -- # decimal 2 00:06:04.654 16:15:24 -- scripts/common.sh@352 -- # local d=2 00:06:04.654 16:15:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:04.654 16:15:24 -- scripts/common.sh@354 -- # echo 2 00:06:04.654 16:15:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:04.654 16:15:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:04.654 16:15:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:04.654 16:15:24 -- scripts/common.sh@367 -- # return 0 00:06:04.654 16:15:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:04.654 16:15:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:04.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.654 --rc genhtml_branch_coverage=1 00:06:04.654 --rc genhtml_function_coverage=1 00:06:04.654 --rc genhtml_legend=1 00:06:04.654 --rc geninfo_all_blocks=1 00:06:04.654 --rc geninfo_unexecuted_blocks=1 00:06:04.654 00:06:04.654 ' 00:06:04.654 16:15:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:04.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.654 --rc genhtml_branch_coverage=1 00:06:04.654 --rc genhtml_function_coverage=1 00:06:04.654 --rc genhtml_legend=1 00:06:04.654 --rc geninfo_all_blocks=1 00:06:04.654 --rc geninfo_unexecuted_blocks=1 00:06:04.654 00:06:04.654 ' 00:06:04.654 16:15:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:04.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.654 --rc genhtml_branch_coverage=1 00:06:04.654 --rc genhtml_function_coverage=1 00:06:04.654 --rc genhtml_legend=1 00:06:04.654 --rc geninfo_all_blocks=1 00:06:04.654 --rc geninfo_unexecuted_blocks=1 00:06:04.654 00:06:04.654 ' 00:06:04.654 16:15:24 
-- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:04.654 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:04.654 --rc genhtml_branch_coverage=1 00:06:04.654 --rc genhtml_function_coverage=1 00:06:04.654 --rc genhtml_legend=1 00:06:04.654 --rc geninfo_all_blocks=1 00:06:04.654 --rc geninfo_unexecuted_blocks=1 00:06:04.654 00:06:04.654 ' 00:06:04.654 16:15:24 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:04.654 16:15:24 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:04.654 16:15:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.654 16:15:24 -- common/autotest_common.sh@10 -- # set +x 00:06:04.654 ************************************ 00:06:04.654 START TEST thread_poller_perf 00:06:04.654 ************************************ 00:06:04.654 16:15:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:04.654 [2024-11-09 16:15:24.277343] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:04.654 [2024-11-09 16:15:24.277550] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58182 ] 00:06:04.913 [2024-11-09 16:15:24.427305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.913 [2024-11-09 16:15:24.603324] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.913 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:06.293 [2024-11-09T16:15:26.063Z] ====================================== 00:06:06.293 [2024-11-09T16:15:26.063Z] busy:2612708676 (cyc) 00:06:06.293 [2024-11-09T16:15:26.063Z] total_run_count: 294000 00:06:06.293 [2024-11-09T16:15:26.063Z] tsc_hz: 2600000000 (cyc) 00:06:06.293 [2024-11-09T16:15:26.063Z] ====================================== 00:06:06.293 [2024-11-09T16:15:26.063Z] poller_cost: 8886 (cyc), 3417 (nsec) 00:06:06.293 00:06:06.293 real 0m1.650s 00:06:06.293 user 0m1.463s 00:06:06.293 sys 0m0.077s 00:06:06.293 16:15:25 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:06.293 16:15:25 -- common/autotest_common.sh@10 -- # set +x 00:06:06.293 ************************************ 00:06:06.293 END TEST thread_poller_perf 00:06:06.293 ************************************ 00:06:06.293 16:15:25 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.293 16:15:25 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:06.293 16:15:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:06.293 16:15:25 -- common/autotest_common.sh@10 -- # set +x 00:06:06.293 ************************************ 00:06:06.293 START TEST thread_poller_perf 00:06:06.293 ************************************ 00:06:06.293 16:15:25 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.293 [2024-11-09 16:15:25.982573] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
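The first run's figures above are internally consistent: poller_cost is busy cycles divided by total_run_count, and the nanosecond value converts that quotient at the reported tsc_hz. A quick check (integer truncation matches the tool's output; the same identity holds for the zero-period run that follows):

  echo $(( 2612708676 / 294000 ))                           # 8886 cyc per poll
  awk 'BEGIN { printf "%d\n", 8886 / 2600000000 * 1e9 }'    # 3417 nsec per poll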
00:06:06.293 [2024-11-09 16:15:25.982806] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58224 ] 00:06:06.552 [2024-11-09 16:15:26.128090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.813 [2024-11-09 16:15:26.336707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.813 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:08.197 [2024-11-09T16:15:27.967Z] ====================================== 00:06:08.197 [2024-11-09T16:15:27.967Z] busy:2604902146 (cyc) 00:06:08.197 [2024-11-09T16:15:27.967Z] total_run_count: 3976000 00:06:08.197 [2024-11-09T16:15:27.967Z] tsc_hz: 2600000000 (cyc) 00:06:08.197 [2024-11-09T16:15:27.967Z] ====================================== 00:06:08.197 [2024-11-09T16:15:27.967Z] poller_cost: 655 (cyc), 251 (nsec) 00:06:08.197 00:06:08.197 real 0m1.643s 00:06:08.197 user 0m1.455s 00:06:08.197 sys 0m0.079s 00:06:08.197 16:15:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.197 16:15:27 -- common/autotest_common.sh@10 -- # set +x 00:06:08.197 ************************************ 00:06:08.197 END TEST thread_poller_perf 00:06:08.197 ************************************ 00:06:08.197 16:15:27 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:08.197 00:06:08.197 real 0m3.532s 00:06:08.197 user 0m3.023s 00:06:08.197 sys 0m0.273s 00:06:08.197 16:15:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.197 ************************************ 00:06:08.197 END TEST thread 00:06:08.197 ************************************ 00:06:08.197 16:15:27 -- common/autotest_common.sh@10 -- # set +x 00:06:08.197 16:15:27 -- spdk/autotest.sh@176 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:08.197 16:15:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.197 16:15:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.197 16:15:27 -- common/autotest_common.sh@10 -- # set +x 00:06:08.197 ************************************ 00:06:08.197 START TEST accel 00:06:08.197 ************************************ 00:06:08.197 16:15:27 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:08.197 * Looking for test storage... 
00:06:08.197 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:08.197 16:15:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:08.197 16:15:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:08.197 16:15:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:08.197 16:15:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:08.197 16:15:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:08.197 16:15:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:08.197 16:15:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:08.197 16:15:27 -- scripts/common.sh@335 -- # IFS=.-: 00:06:08.197 16:15:27 -- scripts/common.sh@335 -- # read -ra ver1 00:06:08.197 16:15:27 -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.197 16:15:27 -- scripts/common.sh@336 -- # read -ra ver2 00:06:08.197 16:15:27 -- scripts/common.sh@337 -- # local 'op=<' 00:06:08.197 16:15:27 -- scripts/common.sh@339 -- # ver1_l=2 00:06:08.197 16:15:27 -- scripts/common.sh@340 -- # ver2_l=1 00:06:08.197 16:15:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:08.197 16:15:27 -- scripts/common.sh@343 -- # case "$op" in 00:06:08.197 16:15:27 -- scripts/common.sh@344 -- # : 1 00:06:08.197 16:15:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:08.197 16:15:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:08.197 16:15:27 -- scripts/common.sh@364 -- # decimal 1 00:06:08.197 16:15:27 -- scripts/common.sh@352 -- # local d=1 00:06:08.197 16:15:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.197 16:15:27 -- scripts/common.sh@354 -- # echo 1 00:06:08.197 16:15:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:08.197 16:15:27 -- scripts/common.sh@365 -- # decimal 2 00:06:08.197 16:15:27 -- scripts/common.sh@352 -- # local d=2 00:06:08.197 16:15:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.197 16:15:27 -- scripts/common.sh@354 -- # echo 2 00:06:08.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:08.197 16:15:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:08.197 16:15:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:08.197 16:15:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:08.197 16:15:27 -- scripts/common.sh@367 -- # return 0 00:06:08.197 16:15:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.197 16:15:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:08.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.197 --rc genhtml_branch_coverage=1 00:06:08.197 --rc genhtml_function_coverage=1 00:06:08.197 --rc genhtml_legend=1 00:06:08.197 --rc geninfo_all_blocks=1 00:06:08.197 --rc geninfo_unexecuted_blocks=1 00:06:08.197 00:06:08.197 ' 00:06:08.197 16:15:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:08.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.197 --rc genhtml_branch_coverage=1 00:06:08.197 --rc genhtml_function_coverage=1 00:06:08.197 --rc genhtml_legend=1 00:06:08.197 --rc geninfo_all_blocks=1 00:06:08.197 --rc geninfo_unexecuted_blocks=1 00:06:08.197 00:06:08.197 ' 00:06:08.197 16:15:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:08.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.197 --rc genhtml_branch_coverage=1 00:06:08.197 --rc genhtml_function_coverage=1 00:06:08.197 --rc genhtml_legend=1 00:06:08.197 --rc geninfo_all_blocks=1 00:06:08.197 --rc geninfo_unexecuted_blocks=1 00:06:08.197 00:06:08.197 ' 00:06:08.197 16:15:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:08.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.197 --rc genhtml_branch_coverage=1 00:06:08.197 --rc genhtml_function_coverage=1 00:06:08.197 --rc genhtml_legend=1 00:06:08.197 --rc geninfo_all_blocks=1 00:06:08.197 --rc geninfo_unexecuted_blocks=1 00:06:08.197 00:06:08.197 ' 00:06:08.197 16:15:27 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:08.197 16:15:27 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:08.197 16:15:27 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:08.197 16:15:27 -- accel/accel.sh@59 -- # spdk_tgt_pid=58312 00:06:08.197 16:15:27 -- accel/accel.sh@60 -- # waitforlisten 58312 00:06:08.197 16:15:27 -- common/autotest_common.sh@829 -- # '[' -z 58312 ']' 00:06:08.197 16:15:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.197 16:15:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.197 16:15:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.197 16:15:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.197 16:15:27 -- common/autotest_common.sh@10 -- # set +x 00:06:08.198 16:15:27 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:08.198 16:15:27 -- accel/accel.sh@58 -- # build_accel_config 00:06:08.198 16:15:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.198 16:15:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.198 16:15:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.198 16:15:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.198 16:15:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.198 16:15:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.198 16:15:27 -- accel/accel.sh@42 -- # jq -r . 
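build_accel_config above assembles an optional JSON config, and accel.sh@58 hands it to spdk_tgt as -c /dev/fd/63, which is bash process substitution. With no accel modules requested (every -gt 0 test above is false) the generated config is essentially empty. A hedged sketch of the same launch pattern; the minimal '{"subsystems": []}' body is an assumption, not the script's exact output:

  # pass an inline JSON config over a /dev/fd path via process substitution
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt \
      -c <(echo '{"subsystems": []}') &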
00:06:08.198 [2024-11-09 16:15:27.890117] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:08.198 [2024-11-09 16:15:27.890696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58312 ] 00:06:08.455 [2024-11-09 16:15:28.039375] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.455 [2024-11-09 16:15:28.213190] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.455 [2024-11-09 16:15:28.213919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.827 16:15:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.827 16:15:29 -- common/autotest_common.sh@862 -- # return 0 00:06:09.827 16:15:29 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:09.827 16:15:29 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:09.827 16:15:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.827 16:15:29 -- common/autotest_common.sh@10 -- # set +x 00:06:09.827 16:15:29 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:09.827 16:15:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:09.827 16:15:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.827 16:15:29 -- accel/accel.sh@64 -- # IFS== 00:06:09.827 16:15:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.827 16:15:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.827 16:15:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.827 16:15:29 -- accel/accel.sh@64 -- # IFS== 00:06:09.827 16:15:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.827 16:15:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.827 16:15:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.827 16:15:29 -- accel/accel.sh@64 -- # IFS== 00:06:09.827 16:15:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.828 16:15:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.828 16:15:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # IFS== 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.828 16:15:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.828 16:15:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # IFS== 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.828 16:15:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.828 16:15:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # IFS== 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.828 16:15:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.828 16:15:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # IFS== 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.828 16:15:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.828 16:15:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # IFS== 
00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.828 16:15:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.828 16:15:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # IFS== 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.828 16:15:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.828 16:15:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # IFS== 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.828 16:15:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.828 16:15:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # IFS== 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.828 16:15:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.828 16:15:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # IFS== 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.828 16:15:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.828 16:15:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # IFS== 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.828 16:15:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.828 16:15:29 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # IFS== 00:06:09.828 16:15:29 -- accel/accel.sh@64 -- # read -r opc module 00:06:09.828 16:15:29 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:09.828 16:15:29 -- accel/accel.sh@67 -- # killprocess 58312 00:06:09.828 16:15:29 -- common/autotest_common.sh@936 -- # '[' -z 58312 ']' 00:06:09.828 16:15:29 -- common/autotest_common.sh@940 -- # kill -0 58312 00:06:09.828 16:15:29 -- common/autotest_common.sh@941 -- # uname 00:06:09.828 16:15:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:09.828 16:15:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 58312 00:06:09.828 killing process with pid 58312 00:06:09.828 16:15:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:09.828 16:15:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:09.828 16:15:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 58312' 00:06:09.828 16:15:29 -- common/autotest_common.sh@955 -- # kill 58312 00:06:09.828 16:15:29 -- common/autotest_common.sh@960 -- # wait 58312 00:06:11.211 16:15:30 -- accel/accel.sh@68 -- # trap - ERR 00:06:11.211 16:15:30 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:11.211 16:15:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:11.211 16:15:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.211 16:15:30 -- common/autotest_common.sh@10 -- # set +x 00:06:11.211 16:15:30 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:06:11.211 16:15:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:11.211 16:15:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.211 16:15:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.211 16:15:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.211 16:15:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 
00:06:11.211 16:15:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.211 16:15:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.211 16:15:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.211 16:15:30 -- accel/accel.sh@42 -- # jq -r . 00:06:11.211 16:15:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.211 16:15:30 -- common/autotest_common.sh@10 -- # set +x 00:06:11.211 16:15:30 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:11.211 16:15:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:11.211 16:15:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.211 16:15:30 -- common/autotest_common.sh@10 -- # set +x 00:06:11.211 ************************************ 00:06:11.211 START TEST accel_missing_filename 00:06:11.211 ************************************ 00:06:11.211 16:15:30 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:06:11.211 16:15:30 -- common/autotest_common.sh@650 -- # local es=0 00:06:11.211 16:15:30 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:11.211 16:15:30 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:11.211 16:15:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.211 16:15:30 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:11.211 16:15:30 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:11.211 16:15:30 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:06:11.211 16:15:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:11.211 16:15:30 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.211 16:15:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:11.211 16:15:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.211 16:15:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.211 16:15:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:11.211 16:15:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:11.211 16:15:30 -- accel/accel.sh@41 -- # local IFS=, 00:06:11.211 16:15:30 -- accel/accel.sh@42 -- # jq -r . 00:06:11.211 [2024-11-09 16:15:30.944133] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:11.211 [2024-11-09 16:15:30.944244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58384 ] 00:06:11.470 [2024-11-09 16:15:31.091389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.727 [2024-11-09 16:15:31.258782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.727 [2024-11-09 16:15:31.397477] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:11.985 [2024-11-09 16:15:31.723960] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:12.243 A filename is required. 
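A few steps back, before these expected-failure cases, the harness snapshotted which module services each accel opcode: the accel_get_opc_assignments RPC returns a JSON object, jq flattens it into key=value lines, and a read loop with IFS== splits each pair into the expected_opcs associative array. A stand-alone sketch of that pattern, with a canned JSON document standing in for the live RPC response:

  declare -A expected_opcs
  while IFS== read -r opc module; do
      expected_opcs["$opc"]=$module    # e.g. expected_opcs[crc32c]=software
  done < <(jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' \
      <<< '{ "copy": "software", "fill": "software", "crc32c": "software" }')
  echo "${expected_opcs[crc32c]}"      # -> software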
00:06:12.243 16:15:31 -- common/autotest_common.sh@653 -- # es=234 00:06:12.243 16:15:31 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.243 ************************************ 00:06:12.243 END TEST accel_missing_filename 00:06:12.243 ************************************ 00:06:12.243 16:15:31 -- common/autotest_common.sh@662 -- # es=106 00:06:12.243 16:15:31 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:12.243 16:15:31 -- common/autotest_common.sh@670 -- # es=1 00:06:12.243 16:15:31 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.243 00:06:12.243 real 0m1.080s 00:06:12.243 user 0m0.877s 00:06:12.243 sys 0m0.128s 00:06:12.243 16:15:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:12.243 16:15:31 -- common/autotest_common.sh@10 -- # set +x 00:06:12.502 16:15:32 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:12.502 16:15:32 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:12.502 16:15:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:12.502 16:15:32 -- common/autotest_common.sh@10 -- # set +x 00:06:12.502 ************************************ 00:06:12.502 START TEST accel_compress_verify 00:06:12.502 ************************************ 00:06:12.502 16:15:32 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:12.502 16:15:32 -- common/autotest_common.sh@650 -- # local es=0 00:06:12.502 16:15:32 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:12.502 16:15:32 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:12.502 16:15:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.502 16:15:32 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:12.502 16:15:32 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.502 16:15:32 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:12.502 16:15:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:12.502 16:15:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.502 16:15:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.502 16:15:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.502 16:15:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.502 16:15:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.502 16:15:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.502 16:15:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.502 16:15:32 -- accel/accel.sh@42 -- # jq -r . 00:06:12.502 [2024-11-09 16:15:32.065326] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
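The es=234, es=106, es=1 hops at the start of this test are the NOT helper normalizing the wrapped command's exit status: anything above 128 is shell shorthand for death by signal, so the 128 bit is stripped (234 becomes 106) and a case statement folds the remainder into a plain 1 before (( !es == 0 )) inverts the verdict; the compress_verify run just below repeats the dance with 161, 33, 1. A hedged sketch of that logic, inferred from the traced lines only; the real autotest_common.sh case surely distinguishes more signal codes than shown here:

  NOT() {                        # succeeds only when the wrapped command fails
      local es=0
      "$@" || es=$?
      if ((es > 128)); then      # 128+N: killed by signal N, as the shell reports it
          es=$((es & ~128))      # 234 -> 106 and 161 -> 33, matching the trace
          case "$es" in
              *) es=1 ;;         # assumed: collapse any signal-style status to 1
          esac
      fi
      ((!es == 0))               # exit 0 exactly when es is non-zero
  }
  NOT accel_perf -t 1 -w compress   # succeeds: compress without -l must fail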
00:06:12.502 [2024-11-09 16:15:32.065426] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58415 ] 00:06:12.502 [2024-11-09 16:15:32.214292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.760 [2024-11-09 16:15:32.382999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.760 [2024-11-09 16:15:32.521538] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:13.326 [2024-11-09 16:15:32.852254] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:13.584 00:06:13.584 Compression does not support the verify option, aborting. 00:06:13.584 16:15:33 -- common/autotest_common.sh@653 -- # es=161 00:06:13.584 16:15:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.584 16:15:33 -- common/autotest_common.sh@662 -- # es=33 00:06:13.584 16:15:33 -- common/autotest_common.sh@663 -- # case "$es" in 00:06:13.584 16:15:33 -- common/autotest_common.sh@670 -- # es=1 00:06:13.584 16:15:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.584 00:06:13.584 real 0m1.080s 00:06:13.584 user 0m0.885s 00:06:13.584 sys 0m0.118s 00:06:13.584 16:15:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.584 16:15:33 -- common/autotest_common.sh@10 -- # set +x 00:06:13.584 ************************************ 00:06:13.584 END TEST accel_compress_verify 00:06:13.584 ************************************ 00:06:13.584 16:15:33 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:13.584 16:15:33 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:13.584 16:15:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.584 16:15:33 -- common/autotest_common.sh@10 -- # set +x 00:06:13.584 ************************************ 00:06:13.584 START TEST accel_wrong_workload 00:06:13.584 ************************************ 00:06:13.584 16:15:33 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:06:13.584 16:15:33 -- common/autotest_common.sh@650 -- # local es=0 00:06:13.584 16:15:33 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:13.584 16:15:33 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:13.584 16:15:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.584 16:15:33 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:13.584 16:15:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.584 16:15:33 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:06:13.584 16:15:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:13.584 16:15:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.584 16:15:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.584 16:15:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.584 16:15:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.584 16:15:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.584 16:15:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.584 16:15:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.584 16:15:33 -- accel/accel.sh@42 -- # jq -r . 
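The type -t probes threaded through these tests are valid_exec_arg confirming that the thing NOT is about to run resolves to something bash can execute at all. A compact sketch of that guard, assuming the obvious reading of the traced case on $(type -t "$arg"):

  valid_exec_arg() {
      local arg=$1
      case "$(type -t "$arg")" in
          file|function|builtin|alias|keyword) return 0 ;;   # bash can run these
          *) echo "command not found or not executable: $arg" >&2; return 1 ;;
      esac
  }
  valid_exec_arg accel_perf && NOT accel_perf -t 1 -w foobar   # the next test below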
00:06:13.584 Unsupported workload type: foobar 00:06:13.584 [2024-11-09 16:15:33.179000] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:13.584 accel_perf options: 00:06:13.584 [-h help message] 00:06:13.584 [-q queue depth per core] 00:06:13.584 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:13.584 [-T number of threads per core 00:06:13.584 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:13.584 [-t time in seconds] 00:06:13.584 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:13.584 [ dif_verify, , dif_generate, dif_generate_copy 00:06:13.584 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:13.584 [-l for compress/decompress workloads, name of uncompressed input file 00:06:13.584 [-S for crc32c workload, use this seed value (default 0) 00:06:13.584 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:13.584 [-f for fill workload, use this BYTE value (default 255) 00:06:13.584 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:13.584 [-y verify result if this switch is on] 00:06:13.584 [-a tasks to allocate per core (default: same value as -q)] 00:06:13.584 Can be used to spread operations across a wider range of memory. 00:06:13.584 16:15:33 -- common/autotest_common.sh@653 -- # es=1 00:06:13.584 16:15:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.584 16:15:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:13.584 16:15:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.584 00:06:13.584 real 0m0.045s 00:06:13.584 user 0m0.052s 00:06:13.584 sys 0m0.019s 00:06:13.584 16:15:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.585 ************************************ 00:06:13.585 END TEST accel_wrong_workload 00:06:13.585 ************************************ 00:06:13.585 16:15:33 -- common/autotest_common.sh@10 -- # set +x 00:06:13.585 16:15:33 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:13.585 16:15:33 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:13.585 16:15:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.585 16:15:33 -- common/autotest_common.sh@10 -- # set +x 00:06:13.585 ************************************ 00:06:13.585 START TEST accel_negative_buffers 00:06:13.585 ************************************ 00:06:13.585 16:15:33 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:13.585 16:15:33 -- common/autotest_common.sh@650 -- # local es=0 00:06:13.585 16:15:33 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:13.585 16:15:33 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:06:13.585 16:15:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.585 16:15:33 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:06:13.585 16:15:33 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:13.585 16:15:33 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:06:13.585 16:15:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:13.585 16:15:33 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:13.585 16:15:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.585 16:15:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.585 16:15:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.585 16:15:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.585 16:15:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.585 16:15:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.585 16:15:33 -- accel/accel.sh@42 -- # jq -r . 00:06:13.585 -x option must be non-negative. 00:06:13.585 [2024-11-09 16:15:33.257736] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:13.585 accel_perf options: 00:06:13.585 [-h help message] 00:06:13.585 [-q queue depth per core] 00:06:13.585 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:13.585 [-T number of threads per core 00:06:13.585 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:13.585 [-t time in seconds] 00:06:13.585 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:13.585 [ dif_verify, , dif_generate, dif_generate_copy 00:06:13.585 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:13.585 [-l for compress/decompress workloads, name of uncompressed input file 00:06:13.585 [-S for crc32c workload, use this seed value (default 0) 00:06:13.585 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:13.585 [-f for fill workload, use this BYTE value (default 255) 00:06:13.585 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:13.585 [-y verify result if this switch is on] 00:06:13.585 [-a tasks to allocate per core (default: same value as -q)] 00:06:13.585 Can be used to spread operations across a wider range of memory. 
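Both rejected runs above bail out with the same usage dump: foobar is not among the listed workload types, and -x -1 trips the non-negative check since xor needs at least two source buffers. For contrast, two invocations the option parser would accept, built only from options shown in that help text and the binary path already used throughout this log:

  # xor across two source buffers, verifying results (-y) for one second:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2
  # crc32c with seed 32, the exact shape of the accel_crc32c test that follows:
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y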
00:06:13.585 16:15:33 -- common/autotest_common.sh@653 -- # es=1 00:06:13.585 ************************************ 00:06:13.585 END TEST accel_negative_buffers 00:06:13.585 ************************************ 00:06:13.585 16:15:33 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:13.585 16:15:33 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:13.585 16:15:33 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:13.585 00:06:13.585 real 0m0.052s 00:06:13.585 user 0m0.055s 00:06:13.585 sys 0m0.029s 00:06:13.585 16:15:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:13.585 16:15:33 -- common/autotest_common.sh@10 -- # set +x 00:06:13.585 16:15:33 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:13.585 16:15:33 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:13.585 16:15:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:13.585 16:15:33 -- common/autotest_common.sh@10 -- # set +x 00:06:13.585 ************************************ 00:06:13.585 START TEST accel_crc32c 00:06:13.585 ************************************ 00:06:13.585 16:15:33 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:13.585 16:15:33 -- accel/accel.sh@16 -- # local accel_opc 00:06:13.585 16:15:33 -- accel/accel.sh@17 -- # local accel_module 00:06:13.585 16:15:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:13.585 16:15:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:13.585 16:15:33 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.585 16:15:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.585 16:15:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.585 16:15:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.585 16:15:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.585 16:15:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.585 16:15:33 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.585 16:15:33 -- accel/accel.sh@42 -- # jq -r . 00:06:13.585 [2024-11-09 16:15:33.347577] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:13.585 [2024-11-09 16:15:33.347674] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58482 ] 00:06:13.844 [2024-11-09 16:15:33.495064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.101 [2024-11-09 16:15:33.664125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.999 16:15:35 -- accel/accel.sh@18 -- # out=' 00:06:15.999 SPDK Configuration: 00:06:15.999 Core mask: 0x1 00:06:15.999 00:06:15.999 Accel Perf Configuration: 00:06:15.999 Workload Type: crc32c 00:06:15.999 CRC-32C seed: 32 00:06:15.999 Transfer size: 4096 bytes 00:06:15.999 Vector count 1 00:06:15.999 Module: software 00:06:15.999 Queue depth: 32 00:06:15.999 Allocate depth: 32 00:06:15.999 # threads/core: 1 00:06:15.999 Run time: 1 seconds 00:06:15.999 Verify: Yes 00:06:15.999 00:06:15.999 Running for 1 seconds... 
00:06:15.999 00:06:15.999 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:15.999 ------------------------------------------------------------------------------------ 00:06:15.999 0,0 455744/s 1780 MiB/s 0 0 00:06:15.999 ==================================================================================== 00:06:15.999 Total 455744/s 1780 MiB/s 0 0' 00:06:15.999 16:15:35 -- accel/accel.sh@20 -- # IFS=: 00:06:15.999 16:15:35 -- accel/accel.sh@20 -- # read -r var val 00:06:15.999 16:15:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:15.999 16:15:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.999 16:15:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.999 16:15:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:15.999 16:15:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.999 16:15:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.999 16:15:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.999 16:15:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.999 16:15:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.999 16:15:35 -- accel/accel.sh@42 -- # jq -r . 00:06:15.999 [2024-11-09 16:15:35.424303] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:15.999 [2024-11-09 16:15:35.424402] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58508 ] 00:06:15.999 [2024-11-09 16:15:35.574197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.999 [2024-11-09 16:15:35.743426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.257 16:15:35 -- accel/accel.sh@21 -- # val= 00:06:16.257 16:15:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # IFS=: 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # read -r var val 00:06:16.257 16:15:35 -- accel/accel.sh@21 -- # val= 00:06:16.257 16:15:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # IFS=: 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # read -r var val 00:06:16.257 16:15:35 -- accel/accel.sh@21 -- # val=0x1 00:06:16.257 16:15:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # IFS=: 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # read -r var val 00:06:16.257 16:15:35 -- accel/accel.sh@21 -- # val= 00:06:16.257 16:15:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # IFS=: 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # read -r var val 00:06:16.257 16:15:35 -- accel/accel.sh@21 -- # val= 00:06:16.257 16:15:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # IFS=: 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # read -r var val 00:06:16.257 16:15:35 -- accel/accel.sh@21 -- # val=crc32c 00:06:16.257 16:15:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.257 16:15:35 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # IFS=: 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # read -r var val 00:06:16.257 16:15:35 -- accel/accel.sh@21 -- # val=32 00:06:16.257 16:15:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # IFS=: 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # read -r var val 00:06:16.257 16:15:35 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:16.257 16:15:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # IFS=: 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # read -r var val 00:06:16.257 16:15:35 -- accel/accel.sh@21 -- # val= 00:06:16.257 16:15:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # IFS=: 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # read -r var val 00:06:16.257 16:15:35 -- accel/accel.sh@21 -- # val=software 00:06:16.257 16:15:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.257 16:15:35 -- accel/accel.sh@23 -- # accel_module=software 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # IFS=: 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # read -r var val 00:06:16.257 16:15:35 -- accel/accel.sh@21 -- # val=32 00:06:16.257 16:15:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # IFS=: 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # read -r var val 00:06:16.257 16:15:35 -- accel/accel.sh@21 -- # val=32 00:06:16.257 16:15:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # IFS=: 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # read -r var val 00:06:16.257 16:15:35 -- accel/accel.sh@21 -- # val=1 00:06:16.257 16:15:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # IFS=: 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # read -r var val 00:06:16.257 16:15:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:16.257 16:15:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # IFS=: 00:06:16.257 16:15:35 -- accel/accel.sh@20 -- # read -r var val 00:06:16.257 16:15:35 -- accel/accel.sh@21 -- # val=Yes 00:06:16.257 16:15:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.258 16:15:35 -- accel/accel.sh@20 -- # IFS=: 00:06:16.258 16:15:35 -- accel/accel.sh@20 -- # read -r var val 00:06:16.258 16:15:35 -- accel/accel.sh@21 -- # val= 00:06:16.258 16:15:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.258 16:15:35 -- accel/accel.sh@20 -- # IFS=: 00:06:16.258 16:15:35 -- accel/accel.sh@20 -- # read -r var val 00:06:16.258 16:15:35 -- accel/accel.sh@21 -- # val= 00:06:16.258 16:15:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:16.258 16:15:35 -- accel/accel.sh@20 -- # IFS=: 00:06:16.258 16:15:35 -- accel/accel.sh@20 -- # read -r var val 00:06:17.677 16:15:37 -- accel/accel.sh@21 -- # val= 00:06:17.677 16:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.677 16:15:37 -- accel/accel.sh@20 -- # IFS=: 00:06:17.677 16:15:37 -- accel/accel.sh@20 -- # read -r var val 00:06:17.677 16:15:37 -- accel/accel.sh@21 -- # val= 00:06:17.677 16:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.677 16:15:37 -- accel/accel.sh@20 -- # IFS=: 00:06:17.677 16:15:37 -- accel/accel.sh@20 -- # read -r var val 00:06:17.677 16:15:37 -- accel/accel.sh@21 -- # val= 00:06:17.677 16:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.677 16:15:37 -- accel/accel.sh@20 -- # IFS=: 00:06:17.677 16:15:37 -- accel/accel.sh@20 -- # read -r var val 00:06:17.677 16:15:37 -- accel/accel.sh@21 -- # val= 00:06:17.677 16:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.677 16:15:37 -- accel/accel.sh@20 -- # IFS=: 00:06:17.677 16:15:37 -- accel/accel.sh@20 -- # read -r var val 00:06:17.677 16:15:37 -- accel/accel.sh@21 -- # val= 00:06:17.677 16:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.677 16:15:37 -- accel/accel.sh@20 -- # IFS=: 00:06:17.677 16:15:37 -- 
accel/accel.sh@20 -- # read -r var val 00:06:17.677 16:15:37 -- accel/accel.sh@21 -- # val= 00:06:17.677 16:15:37 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.677 16:15:37 -- accel/accel.sh@20 -- # IFS=: 00:06:17.677 16:15:37 -- accel/accel.sh@20 -- # read -r var val 00:06:17.677 16:15:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:17.677 ************************************ 00:06:17.677 END TEST accel_crc32c 00:06:17.677 ************************************ 00:06:17.677 16:15:37 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:17.677 16:15:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.677 00:06:17.677 real 0m4.078s 00:06:17.678 user 0m3.628s 00:06:17.678 sys 0m0.242s 00:06:17.678 16:15:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:17.678 16:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:17.678 16:15:37 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:17.678 16:15:37 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:17.678 16:15:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:17.678 16:15:37 -- common/autotest_common.sh@10 -- # set +x 00:06:17.946 ************************************ 00:06:17.946 START TEST accel_crc32c_C2 00:06:17.946 ************************************ 00:06:17.946 16:15:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:17.946 16:15:37 -- accel/accel.sh@16 -- # local accel_opc 00:06:17.946 16:15:37 -- accel/accel.sh@17 -- # local accel_module 00:06:17.946 16:15:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:17.946 16:15:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:17.946 16:15:37 -- accel/accel.sh@12 -- # build_accel_config 00:06:17.946 16:15:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:17.946 16:15:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.946 16:15:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.946 16:15:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:17.946 16:15:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:17.946 16:15:37 -- accel/accel.sh@41 -- # local IFS=, 00:06:17.946 16:15:37 -- accel/accel.sh@42 -- # jq -r . 00:06:17.946 [2024-11-09 16:15:37.464212] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:17.946 [2024-11-09 16:15:37.464325] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58549 ] 00:06:17.946 [2024-11-09 16:15:37.610489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.204 [2024-11-09 16:15:37.749800] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.578 16:15:39 -- accel/accel.sh@18 -- # out=' 00:06:19.578 SPDK Configuration: 00:06:19.578 Core mask: 0x1 00:06:19.578 00:06:19.578 Accel Perf Configuration: 00:06:19.578 Workload Type: crc32c 00:06:19.578 CRC-32C seed: 0 00:06:19.578 Transfer size: 4096 bytes 00:06:19.578 Vector count 2 00:06:19.578 Module: software 00:06:19.578 Queue depth: 32 00:06:19.578 Allocate depth: 32 00:06:19.578 # threads/core: 1 00:06:19.578 Run time: 1 seconds 00:06:19.578 Verify: Yes 00:06:19.579 00:06:19.579 Running for 1 seconds... 
00:06:19.579 00:06:19.579 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:19.579 ------------------------------------------------------------------------------------ 00:06:19.579 0,0 506912/s 1980 MiB/s 0 0 00:06:19.579 ==================================================================================== 00:06:19.579 Total 506912/s 1980 MiB/s 0 0' 00:06:19.579 16:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:19.579 16:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:19.579 16:15:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:19.579 16:15:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.579 16:15:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.579 16:15:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:19.579 16:15:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.579 16:15:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.579 16:15:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.579 16:15:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.579 16:15:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.579 16:15:39 -- accel/accel.sh@42 -- # jq -r . 00:06:19.836 [2024-11-09 16:15:39.370704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:19.836 [2024-11-09 16:15:39.370807] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58575 ] 00:06:19.836 [2024-11-09 16:15:39.517169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.093 [2024-11-09 16:15:39.653789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.093 16:15:39 -- accel/accel.sh@21 -- # val= 00:06:20.093 16:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.093 16:15:39 -- accel/accel.sh@21 -- # val= 00:06:20.093 16:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.093 16:15:39 -- accel/accel.sh@21 -- # val=0x1 00:06:20.093 16:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.093 16:15:39 -- accel/accel.sh@21 -- # val= 00:06:20.093 16:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.093 16:15:39 -- accel/accel.sh@21 -- # val= 00:06:20.093 16:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.093 16:15:39 -- accel/accel.sh@21 -- # val=crc32c 00:06:20.093 16:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.093 16:15:39 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.093 16:15:39 -- accel/accel.sh@21 -- # val=0 00:06:20.093 16:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.093 16:15:39 --
accel/accel.sh@21 -- # val='4096 bytes' 00:06:20.093 16:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.093 16:15:39 -- accel/accel.sh@21 -- # val= 00:06:20.093 16:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.093 16:15:39 -- accel/accel.sh@21 -- # val=software 00:06:20.093 16:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.093 16:15:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.093 16:15:39 -- accel/accel.sh@21 -- # val=32 00:06:20.093 16:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.093 16:15:39 -- accel/accel.sh@21 -- # val=32 00:06:20.093 16:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.093 16:15:39 -- accel/accel.sh@21 -- # val=1 00:06:20.093 16:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.093 16:15:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:20.093 16:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.093 16:15:39 -- accel/accel.sh@21 -- # val=Yes 00:06:20.093 16:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.093 16:15:39 -- accel/accel.sh@21 -- # val= 00:06:20.093 16:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:20.093 16:15:39 -- accel/accel.sh@21 -- # val= 00:06:20.093 16:15:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # IFS=: 00:06:20.093 16:15:39 -- accel/accel.sh@20 -- # read -r var val 00:06:21.467 16:15:41 -- accel/accel.sh@21 -- # val= 00:06:21.467 16:15:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.467 16:15:41 -- accel/accel.sh@20 -- # IFS=: 00:06:21.467 16:15:41 -- accel/accel.sh@20 -- # read -r var val 00:06:21.467 16:15:41 -- accel/accel.sh@21 -- # val= 00:06:21.467 16:15:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.467 16:15:41 -- accel/accel.sh@20 -- # IFS=: 00:06:21.467 16:15:41 -- accel/accel.sh@20 -- # read -r var val 00:06:21.467 16:15:41 -- accel/accel.sh@21 -- # val= 00:06:21.467 16:15:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.467 16:15:41 -- accel/accel.sh@20 -- # IFS=: 00:06:21.467 16:15:41 -- accel/accel.sh@20 -- # read -r var val 00:06:21.467 16:15:41 -- accel/accel.sh@21 -- # val= 00:06:21.467 16:15:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.467 16:15:41 -- accel/accel.sh@20 -- # IFS=: 00:06:21.467 16:15:41 -- accel/accel.sh@20 -- # read -r var val 00:06:21.467 16:15:41 -- accel/accel.sh@21 -- # val= 00:06:21.467 16:15:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.467 16:15:41 -- accel/accel.sh@20 -- # IFS=: 00:06:21.467 16:15:41 -- 
accel/accel.sh@20 -- # read -r var val 00:06:21.467 16:15:41 -- accel/accel.sh@21 -- # val= 00:06:21.467 16:15:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.467 16:15:41 -- accel/accel.sh@20 -- # IFS=: 00:06:21.467 16:15:41 -- accel/accel.sh@20 -- # read -r var val 00:06:21.467 16:15:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:21.467 16:15:41 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:21.467 16:15:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.467 00:06:21.467 real 0m3.809s 00:06:21.467 user 0m3.381s 00:06:21.467 sys 0m0.222s 00:06:21.467 16:15:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:21.467 16:15:41 -- common/autotest_common.sh@10 -- # set +x 00:06:21.725 ************************************ 00:06:21.725 END TEST accel_crc32c_C2 00:06:21.725 ************************************ 00:06:21.725 16:15:41 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:21.725 16:15:41 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:21.725 16:15:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.725 16:15:41 -- common/autotest_common.sh@10 -- # set +x 00:06:21.725 ************************************ 00:06:21.725 START TEST accel_copy 00:06:21.725 ************************************ 00:06:21.725 16:15:41 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:06:21.725 16:15:41 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.725 16:15:41 -- accel/accel.sh@17 -- # local accel_module 00:06:21.725 16:15:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:21.725 16:15:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:21.725 16:15:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.725 16:15:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:21.725 16:15:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.726 16:15:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.726 16:15:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.726 16:15:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.726 16:15:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.726 16:15:41 -- accel/accel.sh@42 -- # jq -r . 00:06:21.726 [2024-11-09 16:15:41.306055] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:21.726 [2024-11-09 16:15:41.306244] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58616 ] 00:06:21.726 [2024-11-09 16:15:41.442470] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.983 [2024-11-09 16:15:41.580587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.880 16:15:43 -- accel/accel.sh@18 -- # out=' 00:06:23.880 SPDK Configuration: 00:06:23.880 Core mask: 0x1 00:06:23.880 00:06:23.880 Accel Perf Configuration: 00:06:23.880 Workload Type: copy 00:06:23.880 Transfer size: 4096 bytes 00:06:23.880 Vector count 1 00:06:23.880 Module: software 00:06:23.880 Queue depth: 32 00:06:23.880 Allocate depth: 32 00:06:23.880 # threads/core: 1 00:06:23.880 Run time: 1 seconds 00:06:23.880 Verify: Yes 00:06:23.880 00:06:23.880 Running for 1 seconds... 
00:06:23.880 00:06:23.880 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:23.880 ------------------------------------------------------------------------------------ 00:06:23.880 0,0 374272/s 1462 MiB/s 0 0 00:06:23.880 ==================================================================================== 00:06:23.880 Total 374272/s 1462 MiB/s 0 0' 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:23.880 16:15:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:23.880 16:15:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:23.880 16:15:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:23.880 16:15:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:23.880 16:15:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:23.880 16:15:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:23.880 16:15:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:23.880 16:15:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:23.880 16:15:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:23.880 16:15:43 -- accel/accel.sh@42 -- # jq -r . 00:06:23.880 [2024-11-09 16:15:43.189852] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:23.880 [2024-11-09 16:15:43.189956] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58641 ] 00:06:23.880 [2024-11-09 16:15:43.336637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.880 [2024-11-09 16:15:43.474967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.880 16:15:43 -- accel/accel.sh@21 -- # val= 00:06:23.880 16:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:23.880 16:15:43 -- accel/accel.sh@21 -- # val= 00:06:23.880 16:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:23.880 16:15:43 -- accel/accel.sh@21 -- # val=0x1 00:06:23.880 16:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:23.880 16:15:43 -- accel/accel.sh@21 -- # val= 00:06:23.880 16:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:23.880 16:15:43 -- accel/accel.sh@21 -- # val= 00:06:23.880 16:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:23.880 16:15:43 -- accel/accel.sh@21 -- # val=copy 00:06:23.880 16:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.880 16:15:43 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:23.880 16:15:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:23.880 16:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:23.880 16:15:43 -- 
accel/accel.sh@21 -- # val= 00:06:23.880 16:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:23.880 16:15:43 -- accel/accel.sh@21 -- # val=software 00:06:23.880 16:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.880 16:15:43 -- accel/accel.sh@23 -- # accel_module=software 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:23.880 16:15:43 -- accel/accel.sh@21 -- # val=32 00:06:23.880 16:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:23.880 16:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:23.881 16:15:43 -- accel/accel.sh@21 -- # val=32 00:06:23.881 16:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.881 16:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:23.881 16:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:23.881 16:15:43 -- accel/accel.sh@21 -- # val=1 00:06:23.881 16:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.881 16:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:23.881 16:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:23.881 16:15:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:23.881 16:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.881 16:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:23.881 16:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:23.881 16:15:43 -- accel/accel.sh@21 -- # val=Yes 00:06:23.881 16:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.881 16:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:23.881 16:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:23.881 16:15:43 -- accel/accel.sh@21 -- # val= 00:06:23.881 16:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.881 16:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:23.881 16:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:23.881 16:15:43 -- accel/accel.sh@21 -- # val= 00:06:23.881 16:15:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:23.881 16:15:43 -- accel/accel.sh@20 -- # IFS=: 00:06:23.881 16:15:43 -- accel/accel.sh@20 -- # read -r var val 00:06:25.279 16:15:45 -- accel/accel.sh@21 -- # val= 00:06:25.279 16:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.279 16:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.279 16:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.279 16:15:45 -- accel/accel.sh@21 -- # val= 00:06:25.280 16:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.280 16:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.280 16:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.280 16:15:45 -- accel/accel.sh@21 -- # val= 00:06:25.280 16:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.280 16:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.280 16:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.280 16:15:45 -- accel/accel.sh@21 -- # val= 00:06:25.280 16:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.280 16:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.280 16:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.280 16:15:45 -- accel/accel.sh@21 -- # val= 00:06:25.280 16:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.280 16:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.280 16:15:45 -- accel/accel.sh@20 -- # read -r var val 00:06:25.280 16:15:45 -- accel/accel.sh@21 -- # val= 00:06:25.280 16:15:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.280 16:15:45 -- accel/accel.sh@20 -- # IFS=: 00:06:25.280 16:15:45 -- 
accel/accel.sh@20 -- # read -r var val 00:06:25.537 16:15:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:25.537 16:15:45 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:25.537 ************************************ 00:06:25.537 END TEST accel_copy 00:06:25.537 ************************************ 00:06:25.537 16:15:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.537 00:06:25.537 real 0m3.783s 00:06:25.537 user 0m3.373s 00:06:25.537 sys 0m0.209s 00:06:25.537 16:15:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:25.537 16:15:45 -- common/autotest_common.sh@10 -- # set +x 00:06:25.537 16:15:45 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:25.537 16:15:45 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:06:25.537 16:15:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:25.537 16:15:45 -- common/autotest_common.sh@10 -- # set +x 00:06:25.537 ************************************ 00:06:25.537 START TEST accel_fill 00:06:25.537 ************************************ 00:06:25.537 16:15:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:25.537 16:15:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:25.537 16:15:45 -- accel/accel.sh@17 -- # local accel_module 00:06:25.537 16:15:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:25.537 16:15:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:25.537 16:15:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.537 16:15:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.537 16:15:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.537 16:15:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.537 16:15:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.537 16:15:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.537 16:15:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.537 16:15:45 -- accel/accel.sh@42 -- # jq -r . 00:06:25.538 [2024-11-09 16:15:45.130657] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:25.538 [2024-11-09 16:15:45.130750] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58678 ] 00:06:25.538 [2024-11-09 16:15:45.277351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.795 [2024-11-09 16:15:45.415112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.691 16:15:46 -- accel/accel.sh@18 -- # out=' 00:06:27.691 SPDK Configuration: 00:06:27.691 Core mask: 0x1 00:06:27.691 00:06:27.691 Accel Perf Configuration: 00:06:27.691 Workload Type: fill 00:06:27.691 Fill pattern: 0x80 00:06:27.691 Transfer size: 4096 bytes 00:06:27.691 Vector count 1 00:06:27.691 Module: software 00:06:27.691 Queue depth: 64 00:06:27.691 Allocate depth: 64 00:06:27.691 # threads/core: 1 00:06:27.691 Run time: 1 seconds 00:06:27.691 Verify: Yes 00:06:27.691 00:06:27.691 Running for 1 seconds... 
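Before the fill numbers land just below, a quick sanity check on the Bandwidth column of these result tables: every figure is consistent with transfers per second times the 4096-byte transfer size, scaled to MiB, and the vector count of the crc32c -C 2 run does not enter the math, which is why that table reads 1980 MiB/s rather than double. An awk one-off rerunning the arithmetic on the numbers reported in this log:

  awk 'BEGIN {
      scale = 4096 / (1024 * 1024)                       # bytes per transfer -> MiB
      printf "crc32c      %4d MiB/s\n", 455744 * scale   # matches 1780
      printf "crc32c -C 2 %4d MiB/s\n", 506912 * scale   # matches 1980
      printf "copy        %4d MiB/s\n", 374272 * scale   # matches 1462
      printf "fill        %4d MiB/s\n", 599616 * scale   # matches 2342
  }'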
00:06:27.691 00:06:27.691 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:27.691 ------------------------------------------------------------------------------------ 00:06:27.691 0,0 599616/s 2342 MiB/s 0 0 00:06:27.691 ==================================================================================== 00:06:27.691 Total 599616/s 2342 MiB/s 0 0' 00:06:27.691 16:15:46 -- accel/accel.sh@20 -- # IFS=: 00:06:27.691 16:15:46 -- accel/accel.sh@20 -- # read -r var val 00:06:27.691 16:15:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:27.691 16:15:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:27.692 16:15:46 -- accel/accel.sh@12 -- # build_accel_config 00:06:27.692 16:15:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:27.692 16:15:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:27.692 16:15:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:27.692 16:15:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:27.692 16:15:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:27.692 16:15:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:27.692 16:15:46 -- accel/accel.sh@42 -- # jq -r . 00:06:27.692 [2024-11-09 16:15:47.023847] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:27.692 [2024-11-09 16:15:47.023948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58698 ] 00:06:27.692 [2024-11-09 16:15:47.169098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.692 [2024-11-09 16:15:47.308146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.692 16:15:47 -- accel/accel.sh@21 -- # val= 00:06:27.692 16:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:27.692 16:15:47 -- accel/accel.sh@21 -- # val= 00:06:27.692 16:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:27.692 16:15:47 -- accel/accel.sh@21 -- # val=0x1 00:06:27.692 16:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:27.692 16:15:47 -- accel/accel.sh@21 -- # val= 00:06:27.692 16:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:27.692 16:15:47 -- accel/accel.sh@21 -- # val= 00:06:27.692 16:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:27.692 16:15:47 -- accel/accel.sh@21 -- # val=fill 00:06:27.692 16:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.692 16:15:47 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:27.692 16:15:47 -- accel/accel.sh@21 -- # val=0x80 00:06:27.692 16:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # read -r var val 
00:06:27.692 16:15:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:27.692 16:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:27.692 16:15:47 -- accel/accel.sh@21 -- # val= 00:06:27.692 16:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:27.692 16:15:47 -- accel/accel.sh@21 -- # val=software 00:06:27.692 16:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.692 16:15:47 -- accel/accel.sh@23 -- # accel_module=software 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:27.692 16:15:47 -- accel/accel.sh@21 -- # val=64 00:06:27.692 16:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:27.692 16:15:47 -- accel/accel.sh@21 -- # val=64 00:06:27.692 16:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:27.692 16:15:47 -- accel/accel.sh@21 -- # val=1 00:06:27.692 16:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:27.692 16:15:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:27.692 16:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:27.692 16:15:47 -- accel/accel.sh@21 -- # val=Yes 00:06:27.692 16:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:27.692 16:15:47 -- accel/accel.sh@21 -- # val= 00:06:27.692 16:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:27.692 16:15:47 -- accel/accel.sh@21 -- # val= 00:06:27.692 16:15:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # IFS=: 00:06:27.692 16:15:47 -- accel/accel.sh@20 -- # read -r var val 00:06:29.589 16:15:48 -- accel/accel.sh@21 -- # val= 00:06:29.589 16:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.589 16:15:48 -- accel/accel.sh@20 -- # IFS=: 00:06:29.589 16:15:48 -- accel/accel.sh@20 -- # read -r var val 00:06:29.589 16:15:48 -- accel/accel.sh@21 -- # val= 00:06:29.589 16:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.589 16:15:48 -- accel/accel.sh@20 -- # IFS=: 00:06:29.589 16:15:48 -- accel/accel.sh@20 -- # read -r var val 00:06:29.589 16:15:48 -- accel/accel.sh@21 -- # val= 00:06:29.589 16:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.589 16:15:48 -- accel/accel.sh@20 -- # IFS=: 00:06:29.589 16:15:48 -- accel/accel.sh@20 -- # read -r var val 00:06:29.589 16:15:48 -- accel/accel.sh@21 -- # val= 00:06:29.589 16:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.589 16:15:48 -- accel/accel.sh@20 -- # IFS=: 00:06:29.589 16:15:48 -- accel/accel.sh@20 -- # read -r var val 00:06:29.589 16:15:48 -- accel/accel.sh@21 -- # val= 00:06:29.589 16:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.589 16:15:48 -- accel/accel.sh@20 -- # IFS=: 
00:06:29.589 16:15:48 -- accel/accel.sh@20 -- # read -r var val 00:06:29.589 16:15:48 -- accel/accel.sh@21 -- # val= 00:06:29.589 16:15:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.589 16:15:48 -- accel/accel.sh@20 -- # IFS=: 00:06:29.589 16:15:48 -- accel/accel.sh@20 -- # read -r var val 00:06:29.589 16:15:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.589 16:15:48 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:29.589 16:15:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.589 00:06:29.589 real 0m3.794s 00:06:29.589 user 0m3.361s 00:06:29.589 sys 0m0.230s 00:06:29.589 16:15:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:29.589 16:15:48 -- common/autotest_common.sh@10 -- # set +x 00:06:29.589 ************************************ 00:06:29.589 END TEST accel_fill 00:06:29.589 ************************************ 00:06:29.589 16:15:48 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:29.589 16:15:48 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:29.589 16:15:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:29.589 16:15:48 -- common/autotest_common.sh@10 -- # set +x 00:06:29.589 ************************************ 00:06:29.589 START TEST accel_copy_crc32c 00:06:29.589 ************************************ 00:06:29.589 16:15:48 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:06:29.589 16:15:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.589 16:15:48 -- accel/accel.sh@17 -- # local accel_module 00:06:29.589 16:15:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:29.589 16:15:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:29.589 16:15:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.589 16:15:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.589 16:15:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.589 16:15:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.589 16:15:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.589 16:15:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:29.589 16:15:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.589 16:15:48 -- accel/accel.sh@42 -- # jq -r . 00:06:29.589 [2024-11-09 16:15:48.960283] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:29.589 [2024-11-09 16:15:48.960384] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58739 ] 00:06:29.589 [2024-11-09 16:15:49.107661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.589 [2024-11-09 16:15:49.276188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.507 16:15:51 -- accel/accel.sh@18 -- # out=' 00:06:31.507 SPDK Configuration: 00:06:31.507 Core mask: 0x1 00:06:31.507 00:06:31.507 Accel Perf Configuration: 00:06:31.507 Workload Type: copy_crc32c 00:06:31.507 CRC-32C seed: 0 00:06:31.507 Vector size: 4096 bytes 00:06:31.507 Transfer size: 4096 bytes 00:06:31.507 Vector count 1 00:06:31.507 Module: software 00:06:31.507 Queue depth: 32 00:06:31.507 Allocate depth: 32 00:06:31.507 # threads/core: 1 00:06:31.507 Run time: 1 seconds 00:06:31.507 Verify: Yes 00:06:31.507 00:06:31.507 Running for 1 seconds... 
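Each Bandwidth cell in these summary tables is transfers per second times the reported transfer size, converted to MiB/s; a quick shell check (mine, not part of the harness output) reproduces the fill row above:

  awk 'BEGIN { printf "%d MiB/s\n", 599616 * 4096 / (1024 * 1024) }'
  # prints 2342, matching both the 0,0 row and the Total row of the fill run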
00:06:31.507 00:06:31.507 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.507 ------------------------------------------------------------------------------------ 00:06:31.507 0,0 238592/s 932 MiB/s 0 0 00:06:31.507 ==================================================================================== 00:06:31.507 Total 238592/s 932 MiB/s 0 0' 00:06:31.507 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.507 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.507 16:15:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:31.507 16:15:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:31.507 16:15:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.507 16:15:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.507 16:15:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.507 16:15:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.507 16:15:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.507 16:15:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.507 16:15:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.507 16:15:51 -- accel/accel.sh@42 -- # jq -r . 00:06:31.507 [2024-11-09 16:15:51.039530] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:31.507 [2024-11-09 16:15:51.039608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58765 ] 00:06:31.507 [2024-11-09 16:15:51.181572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.769 [2024-11-09 16:15:51.364345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.769 16:15:51 -- accel/accel.sh@21 -- # val= 00:06:31.769 16:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.769 16:15:51 -- accel/accel.sh@21 -- # val= 00:06:31.769 16:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.769 16:15:51 -- accel/accel.sh@21 -- # val=0x1 00:06:31.769 16:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.769 16:15:51 -- accel/accel.sh@21 -- # val= 00:06:31.769 16:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.769 16:15:51 -- accel/accel.sh@21 -- # val= 00:06:31.769 16:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.769 16:15:51 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:31.769 16:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.769 16:15:51 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.769 16:15:51 -- accel/accel.sh@21 -- # val=0 00:06:31.769 16:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.769 
16:15:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.769 16:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.769 16:15:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.769 16:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.769 16:15:51 -- accel/accel.sh@21 -- # val= 00:06:31.769 16:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.769 16:15:51 -- accel/accel.sh@21 -- # val=software 00:06:31.769 16:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.769 16:15:51 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.769 16:15:51 -- accel/accel.sh@21 -- # val=32 00:06:31.769 16:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.769 16:15:51 -- accel/accel.sh@21 -- # val=32 00:06:31.769 16:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.769 16:15:51 -- accel/accel.sh@21 -- # val=1 00:06:31.769 16:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.769 16:15:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:31.769 16:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.769 16:15:51 -- accel/accel.sh@21 -- # val=Yes 00:06:31.769 16:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.769 16:15:51 -- accel/accel.sh@21 -- # val= 00:06:31.769 16:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:31.769 16:15:51 -- accel/accel.sh@21 -- # val= 00:06:31.769 16:15:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # IFS=: 00:06:31.769 16:15:51 -- accel/accel.sh@20 -- # read -r var val 00:06:33.671 16:15:52 -- accel/accel.sh@21 -- # val= 00:06:33.671 16:15:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.671 16:15:52 -- accel/accel.sh@20 -- # IFS=: 00:06:33.671 16:15:52 -- accel/accel.sh@20 -- # read -r var val 00:06:33.671 16:15:52 -- accel/accel.sh@21 -- # val= 00:06:33.671 16:15:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.671 16:15:52 -- accel/accel.sh@20 -- # IFS=: 00:06:33.671 16:15:52 -- accel/accel.sh@20 -- # read -r var val 00:06:33.671 16:15:52 -- accel/accel.sh@21 -- # val= 00:06:33.671 16:15:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.671 16:15:52 -- accel/accel.sh@20 -- # IFS=: 00:06:33.671 16:15:52 -- accel/accel.sh@20 -- # read -r var val 00:06:33.671 16:15:52 -- accel/accel.sh@21 -- # val= 00:06:33.671 16:15:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.671 16:15:52 -- accel/accel.sh@20 -- # IFS=: 
00:06:33.671 16:15:52 -- accel/accel.sh@20 -- # read -r var val 00:06:33.671 16:15:52 -- accel/accel.sh@21 -- # val= 00:06:33.671 16:15:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.671 16:15:52 -- accel/accel.sh@20 -- # IFS=: 00:06:33.671 16:15:52 -- accel/accel.sh@20 -- # read -r var val 00:06:33.671 16:15:52 -- accel/accel.sh@21 -- # val= 00:06:33.671 16:15:52 -- accel/accel.sh@22 -- # case "$var" in 00:06:33.671 16:15:52 -- accel/accel.sh@20 -- # IFS=: 00:06:33.671 16:15:52 -- accel/accel.sh@20 -- # read -r var val 00:06:33.671 16:15:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:33.671 16:15:52 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:33.671 16:15:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:33.671 00:06:33.671 real 0m4.064s 00:06:33.671 user 0m3.629s 00:06:33.671 sys 0m0.230s 00:06:33.671 16:15:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:33.671 16:15:52 -- common/autotest_common.sh@10 -- # set +x 00:06:33.671 ************************************ 00:06:33.671 END TEST accel_copy_crc32c 00:06:33.671 ************************************ 00:06:33.671 16:15:53 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:33.671 16:15:53 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:33.671 16:15:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:33.671 16:15:53 -- common/autotest_common.sh@10 -- # set +x 00:06:33.671 ************************************ 00:06:33.671 START TEST accel_copy_crc32c_C2 00:06:33.671 ************************************ 00:06:33.671 16:15:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:33.671 16:15:53 -- accel/accel.sh@16 -- # local accel_opc 00:06:33.671 16:15:53 -- accel/accel.sh@17 -- # local accel_module 00:06:33.671 16:15:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:33.671 16:15:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:33.671 16:15:53 -- accel/accel.sh@12 -- # build_accel_config 00:06:33.671 16:15:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:33.671 16:15:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.671 16:15:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.671 16:15:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:33.671 16:15:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:33.671 16:15:53 -- accel/accel.sh@41 -- # local IFS=, 00:06:33.671 16:15:53 -- accel/accel.sh@42 -- # jq -r . 00:06:33.671 [2024-11-09 16:15:53.062279] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:33.671 [2024-11-09 16:15:53.062394] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58806 ] 00:06:33.671 [2024-11-09 16:15:53.207569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.671 [2024-11-09 16:15:53.359631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.571 16:15:54 -- accel/accel.sh@18 -- # out=' 00:06:35.571 SPDK Configuration: 00:06:35.571 Core mask: 0x1 00:06:35.571 00:06:35.571 Accel Perf Configuration: 00:06:35.571 Workload Type: copy_crc32c 00:06:35.571 CRC-32C seed: 0 00:06:35.571 Vector size: 4096 bytes 00:06:35.571 Transfer size: 8192 bytes 00:06:35.571 Vector count 2 00:06:35.571 Module: software 00:06:35.571 Queue depth: 32 00:06:35.571 Allocate depth: 32 00:06:35.571 # threads/core: 1 00:06:35.571 Run time: 1 seconds 00:06:35.571 Verify: Yes 00:06:35.571 00:06:35.571 Running for 1 seconds... 00:06:35.571 00:06:35.571 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:35.571 ------------------------------------------------------------------------------------ 00:06:35.571 0,0 218976/s 1710 MiB/s 0 0 00:06:35.571 ==================================================================================== 00:06:35.571 Total 218976/s 1710 MiB/s 0 0' 00:06:35.571 16:15:54 -- accel/accel.sh@20 -- # IFS=: 00:06:35.571 16:15:54 -- accel/accel.sh@20 -- # read -r var val 00:06:35.571 16:15:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:35.571 16:15:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:35.571 16:15:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.571 16:15:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.571 16:15:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.571 16:15:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.571 16:15:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.571 16:15:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.571 16:15:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:35.571 16:15:54 -- accel/accel.sh@42 -- # jq -r . 00:06:35.571 [2024-11-09 16:15:54.992369] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:06:35.571 [2024-11-09 16:15:54.992473] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58832 ] 00:06:35.571 [2024-11-09 16:15:55.140175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.571 [2024-11-09 16:15:55.323404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.829 16:15:55 -- accel/accel.sh@21 -- # val= 00:06:35.829 16:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:35.829 16:15:55 -- accel/accel.sh@21 -- # val= 00:06:35.829 16:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:35.829 16:15:55 -- accel/accel.sh@21 -- # val=0x1 00:06:35.829 16:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:35.829 16:15:55 -- accel/accel.sh@21 -- # val= 00:06:35.829 16:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:35.829 16:15:55 -- accel/accel.sh@21 -- # val= 00:06:35.829 16:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:35.829 16:15:55 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:35.829 16:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.829 16:15:55 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:35.829 16:15:55 -- accel/accel.sh@21 -- # val=0 00:06:35.829 16:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:35.829 16:15:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:35.829 16:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:35.829 16:15:55 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:35.829 16:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:35.829 16:15:55 -- accel/accel.sh@21 -- # val= 00:06:35.829 16:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:35.829 16:15:55 -- accel/accel.sh@21 -- # val=software 00:06:35.829 16:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.829 16:15:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:35.829 16:15:55 -- accel/accel.sh@21 -- # val=32 00:06:35.829 16:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:35.829 16:15:55 -- accel/accel.sh@21 -- # val=32 
00:06:35.829 16:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:35.829 16:15:55 -- accel/accel.sh@21 -- # val=1 00:06:35.829 16:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:35.829 16:15:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:35.829 16:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:35.829 16:15:55 -- accel/accel.sh@21 -- # val=Yes 00:06:35.829 16:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:35.829 16:15:55 -- accel/accel.sh@21 -- # val= 00:06:35.829 16:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:35.829 16:15:55 -- accel/accel.sh@21 -- # val= 00:06:35.829 16:15:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # IFS=: 00:06:35.829 16:15:55 -- accel/accel.sh@20 -- # read -r var val 00:06:38.361 16:15:57 -- accel/accel.sh@21 -- # val= 00:06:38.361 16:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.361 16:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:38.361 16:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:38.361 16:15:57 -- accel/accel.sh@21 -- # val= 00:06:38.361 16:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.361 16:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:38.361 16:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:38.361 16:15:57 -- accel/accel.sh@21 -- # val= 00:06:38.361 16:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.361 16:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:38.361 16:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:38.361 16:15:57 -- accel/accel.sh@21 -- # val= 00:06:38.361 16:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.361 16:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:38.361 16:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:38.361 16:15:57 -- accel/accel.sh@21 -- # val= 00:06:38.361 16:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.361 16:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:38.361 16:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:38.361 16:15:57 -- accel/accel.sh@21 -- # val= 00:06:38.361 16:15:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.361 16:15:57 -- accel/accel.sh@20 -- # IFS=: 00:06:38.361 16:15:57 -- accel/accel.sh@20 -- # read -r var val 00:06:38.361 16:15:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.361 16:15:57 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:38.361 16:15:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.361 00:06:38.361 real 0m4.789s 00:06:38.361 user 0m3.607s 00:06:38.361 sys 0m0.226s 00:06:38.361 16:15:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:38.361 16:15:57 -- common/autotest_common.sh@10 -- # set +x 00:06:38.361 ************************************ 00:06:38.361 END TEST accel_copy_crc32c_C2 00:06:38.361 ************************************ 00:06:38.361 16:15:57 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:38.361 16:15:57 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 
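With -C 2 the configuration dump reports two 4096-byte vectors per operation and a transfer size of 8192 bytes, so both summary rows are computed against the full 8192; the same kind of check as before:

  awk 'BEGIN { printf "%d MiB/s\n", 218976 * 8192 / (1024 * 1024) }'
  # prints 1710, agreeing with the per-core and Total rows of the copy_crc32c -C 2 run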
00:06:38.361 16:15:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.361 16:15:57 -- common/autotest_common.sh@10 -- # set +x 00:06:38.361 ************************************ 00:06:38.361 START TEST accel_dualcast 00:06:38.361 ************************************ 00:06:38.361 16:15:57 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:06:38.361 16:15:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.361 16:15:57 -- accel/accel.sh@17 -- # local accel_module 00:06:38.361 16:15:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:38.361 16:15:57 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:38.361 16:15:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.361 16:15:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.361 16:15:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.361 16:15:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.361 16:15:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.361 16:15:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.361 16:15:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.361 16:15:57 -- accel/accel.sh@42 -- # jq -r . 00:06:38.361 [2024-11-09 16:15:57.892876] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:38.361 [2024-11-09 16:15:57.893071] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58886 ] 00:06:38.361 [2024-11-09 16:15:58.041407] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.622 [2024-11-09 16:15:58.221760] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.527 16:15:59 -- accel/accel.sh@18 -- # out=' 00:06:40.528 SPDK Configuration: 00:06:40.528 Core mask: 0x1 00:06:40.528 00:06:40.528 Accel Perf Configuration: 00:06:40.528 Workload Type: dualcast 00:06:40.528 Transfer size: 4096 bytes 00:06:40.528 Vector count 1 00:06:40.528 Module: software 00:06:40.528 Queue depth: 32 00:06:40.528 Allocate depth: 32 00:06:40.528 # threads/core: 1 00:06:40.528 Run time: 1 seconds 00:06:40.528 Verify: Yes 00:06:40.528 00:06:40.528 Running for 1 seconds... 00:06:40.528 00:06:40.528 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.528 ------------------------------------------------------------------------------------ 00:06:40.528 0,0 305856/s 1194 MiB/s 0 0 00:06:40.528 ==================================================================================== 00:06:40.528 Total 305856/s 1194 MiB/s 0 0' 00:06:40.528 16:15:59 -- accel/accel.sh@20 -- # IFS=: 00:06:40.528 16:15:59 -- accel/accel.sh@20 -- # read -r var val 00:06:40.528 16:15:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:40.528 16:15:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:40.528 16:15:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.528 16:15:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.528 16:15:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.528 16:15:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.528 16:15:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.528 16:15:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.528 16:15:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.528 16:15:59 -- accel/accel.sh@42 -- # jq -r . 
00:06:40.528 [2024-11-09 16:16:00.008538] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:40.528 [2024-11-09 16:16:00.008641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58912 ] 00:06:40.528 [2024-11-09 16:16:00.158840] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.789 [2024-11-09 16:16:00.352162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.789 16:16:00 -- accel/accel.sh@21 -- # val= 00:06:40.789 16:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # IFS=: 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # read -r var val 00:06:40.789 16:16:00 -- accel/accel.sh@21 -- # val= 00:06:40.789 16:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # IFS=: 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # read -r var val 00:06:40.789 16:16:00 -- accel/accel.sh@21 -- # val=0x1 00:06:40.789 16:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # IFS=: 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # read -r var val 00:06:40.789 16:16:00 -- accel/accel.sh@21 -- # val= 00:06:40.789 16:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # IFS=: 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # read -r var val 00:06:40.789 16:16:00 -- accel/accel.sh@21 -- # val= 00:06:40.789 16:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # IFS=: 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # read -r var val 00:06:40.789 16:16:00 -- accel/accel.sh@21 -- # val=dualcast 00:06:40.789 16:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.789 16:16:00 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # IFS=: 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # read -r var val 00:06:40.789 16:16:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.789 16:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # IFS=: 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # read -r var val 00:06:40.789 16:16:00 -- accel/accel.sh@21 -- # val= 00:06:40.789 16:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # IFS=: 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # read -r var val 00:06:40.789 16:16:00 -- accel/accel.sh@21 -- # val=software 00:06:40.789 16:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.789 16:16:00 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # IFS=: 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # read -r var val 00:06:40.789 16:16:00 -- accel/accel.sh@21 -- # val=32 00:06:40.789 16:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # IFS=: 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # read -r var val 00:06:40.789 16:16:00 -- accel/accel.sh@21 -- # val=32 00:06:40.789 16:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # IFS=: 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # read -r var val 00:06:40.789 16:16:00 -- accel/accel.sh@21 -- # val=1 00:06:40.789 16:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # IFS=: 00:06:40.789 
16:16:00 -- accel/accel.sh@20 -- # read -r var val 00:06:40.789 16:16:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.789 16:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # IFS=: 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # read -r var val 00:06:40.789 16:16:00 -- accel/accel.sh@21 -- # val=Yes 00:06:40.789 16:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # IFS=: 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # read -r var val 00:06:40.789 16:16:00 -- accel/accel.sh@21 -- # val= 00:06:40.789 16:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # IFS=: 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # read -r var val 00:06:40.789 16:16:00 -- accel/accel.sh@21 -- # val= 00:06:40.789 16:16:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # IFS=: 00:06:40.789 16:16:00 -- accel/accel.sh@20 -- # read -r var val 00:06:42.704 16:16:02 -- accel/accel.sh@21 -- # val= 00:06:42.704 16:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.704 16:16:02 -- accel/accel.sh@20 -- # IFS=: 00:06:42.704 16:16:02 -- accel/accel.sh@20 -- # read -r var val 00:06:42.704 16:16:02 -- accel/accel.sh@21 -- # val= 00:06:42.704 16:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.704 16:16:02 -- accel/accel.sh@20 -- # IFS=: 00:06:42.704 16:16:02 -- accel/accel.sh@20 -- # read -r var val 00:06:42.704 16:16:02 -- accel/accel.sh@21 -- # val= 00:06:42.704 16:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.704 16:16:02 -- accel/accel.sh@20 -- # IFS=: 00:06:42.704 16:16:02 -- accel/accel.sh@20 -- # read -r var val 00:06:42.704 16:16:02 -- accel/accel.sh@21 -- # val= 00:06:42.704 16:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.704 16:16:02 -- accel/accel.sh@20 -- # IFS=: 00:06:42.704 16:16:02 -- accel/accel.sh@20 -- # read -r var val 00:06:42.704 16:16:02 -- accel/accel.sh@21 -- # val= 00:06:42.704 16:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.704 16:16:02 -- accel/accel.sh@20 -- # IFS=: 00:06:42.704 16:16:02 -- accel/accel.sh@20 -- # read -r var val 00:06:42.704 16:16:02 -- accel/accel.sh@21 -- # val= 00:06:42.704 16:16:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:42.704 16:16:02 -- accel/accel.sh@20 -- # IFS=: 00:06:42.704 16:16:02 -- accel/accel.sh@20 -- # read -r var val 00:06:42.704 16:16:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:42.704 16:16:02 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:42.704 16:16:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:42.704 00:06:42.704 real 0m4.243s 00:06:42.704 user 0m3.787s 00:06:42.704 sys 0m0.243s 00:06:42.704 ************************************ 00:06:42.704 END TEST accel_dualcast 00:06:42.704 ************************************ 00:06:42.704 16:16:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:42.704 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:06:42.704 16:16:02 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:42.704 16:16:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:42.704 16:16:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:42.704 16:16:02 -- common/autotest_common.sh@10 -- # set +x 00:06:42.704 ************************************ 00:06:42.704 START TEST accel_compare 00:06:42.704 ************************************ 00:06:42.704 16:16:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:06:42.704 
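Dualcast, as the accel documentation describes it, copies a single 4096-byte source into two destination buffers per operation; a standalone sketch, under the same assumption that the JSON config fd can be omitted:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y
  # one source, two destinations per op; -y verifies both copies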
16:16:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.704 16:16:02 -- accel/accel.sh@17 -- # local accel_module 00:06:42.704 16:16:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:42.704 16:16:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:42.704 16:16:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.704 16:16:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.704 16:16:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.704 16:16:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.704 16:16:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.704 16:16:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.704 16:16:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.704 16:16:02 -- accel/accel.sh@42 -- # jq -r . 00:06:42.704 [2024-11-09 16:16:02.200653] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:42.704 [2024-11-09 16:16:02.200729] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58953 ] 00:06:42.704 [2024-11-09 16:16:02.343139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.969 [2024-11-09 16:16:02.517384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.896 16:16:04 -- accel/accel.sh@18 -- # out=' 00:06:44.896 SPDK Configuration: 00:06:44.896 Core mask: 0x1 00:06:44.896 00:06:44.896 Accel Perf Configuration: 00:06:44.896 Workload Type: compare 00:06:44.896 Transfer size: 4096 bytes 00:06:44.896 Vector count 1 00:06:44.896 Module: software 00:06:44.896 Queue depth: 32 00:06:44.896 Allocate depth: 32 00:06:44.896 # threads/core: 1 00:06:44.896 Run time: 1 seconds 00:06:44.896 Verify: Yes 00:06:44.896 00:06:44.896 Running for 1 seconds... 00:06:44.896 00:06:44.896 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:44.896 ------------------------------------------------------------------------------------ 00:06:44.896 0,0 424256/s 1657 MiB/s 0 0 00:06:44.896 ==================================================================================== 00:06:44.896 Total 424256/s 1657 MiB/s 0 0' 00:06:44.896 16:16:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:44.896 16:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:44.896 16:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:44.896 16:16:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:44.896 16:16:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.896 16:16:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.896 16:16:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.896 16:16:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.896 16:16:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.896 16:16:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.896 16:16:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.896 16:16:04 -- accel/accel.sh@42 -- # jq -r . 00:06:44.896 [2024-11-09 16:16:04.281643] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:44.896 [2024-11-09 16:16:04.281751] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58979 ] 00:06:44.896 [2024-11-09 16:16:04.428304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.896 [2024-11-09 16:16:04.608896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.157 16:16:04 -- accel/accel.sh@21 -- # val= 00:06:45.157 16:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.157 16:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.157 16:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.157 16:16:04 -- accel/accel.sh@21 -- # val= 00:06:45.157 16:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.157 16:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.157 16:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.157 16:16:04 -- accel/accel.sh@21 -- # val=0x1 00:06:45.157 16:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.157 16:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.157 16:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.157 16:16:04 -- accel/accel.sh@21 -- # val= 00:06:45.158 16:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.158 16:16:04 -- accel/accel.sh@21 -- # val= 00:06:45.158 16:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.158 16:16:04 -- accel/accel.sh@21 -- # val=compare 00:06:45.158 16:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.158 16:16:04 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.158 16:16:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:45.158 16:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.158 16:16:04 -- accel/accel.sh@21 -- # val= 00:06:45.158 16:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.158 16:16:04 -- accel/accel.sh@21 -- # val=software 00:06:45.158 16:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.158 16:16:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.158 16:16:04 -- accel/accel.sh@21 -- # val=32 00:06:45.158 16:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.158 16:16:04 -- accel/accel.sh@21 -- # val=32 00:06:45.158 16:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.158 16:16:04 -- accel/accel.sh@21 -- # val=1 00:06:45.158 16:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.158 16:16:04 -- accel/accel.sh@21 -- # val='1 seconds' 
00:06:45.158 16:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.158 16:16:04 -- accel/accel.sh@21 -- # val=Yes 00:06:45.158 16:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.158 16:16:04 -- accel/accel.sh@21 -- # val= 00:06:45.158 16:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:45.158 16:16:04 -- accel/accel.sh@21 -- # val= 00:06:45.158 16:16:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # IFS=: 00:06:45.158 16:16:04 -- accel/accel.sh@20 -- # read -r var val 00:06:46.536 16:16:06 -- accel/accel.sh@21 -- # val= 00:06:46.536 16:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.536 16:16:06 -- accel/accel.sh@20 -- # IFS=: 00:06:46.536 16:16:06 -- accel/accel.sh@20 -- # read -r var val 00:06:46.536 16:16:06 -- accel/accel.sh@21 -- # val= 00:06:46.536 16:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.536 16:16:06 -- accel/accel.sh@20 -- # IFS=: 00:06:46.536 16:16:06 -- accel/accel.sh@20 -- # read -r var val 00:06:46.536 16:16:06 -- accel/accel.sh@21 -- # val= 00:06:46.536 16:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.536 16:16:06 -- accel/accel.sh@20 -- # IFS=: 00:06:46.536 16:16:06 -- accel/accel.sh@20 -- # read -r var val 00:06:46.536 16:16:06 -- accel/accel.sh@21 -- # val= 00:06:46.536 16:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.536 16:16:06 -- accel/accel.sh@20 -- # IFS=: 00:06:46.536 16:16:06 -- accel/accel.sh@20 -- # read -r var val 00:06:46.536 16:16:06 -- accel/accel.sh@21 -- # val= 00:06:46.536 16:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.536 16:16:06 -- accel/accel.sh@20 -- # IFS=: 00:06:46.536 16:16:06 -- accel/accel.sh@20 -- # read -r var val 00:06:46.536 16:16:06 -- accel/accel.sh@21 -- # val= 00:06:46.536 16:16:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.536 16:16:06 -- accel/accel.sh@20 -- # IFS=: 00:06:46.536 16:16:06 -- accel/accel.sh@20 -- # read -r var val 00:06:46.536 16:16:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:46.536 16:16:06 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:46.536 16:16:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.536 00:06:46.536 real 0m4.048s 00:06:46.536 user 0m3.613s 00:06:46.536 sys 0m0.229s 00:06:46.536 ************************************ 00:06:46.536 END TEST accel_compare 00:06:46.536 ************************************ 00:06:46.536 16:16:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:46.536 16:16:06 -- common/autotest_common.sh@10 -- # set +x 00:06:46.536 16:16:06 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:46.536 16:16:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:46.536 16:16:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:46.536 16:16:06 -- common/autotest_common.sh@10 -- # set +x 00:06:46.536 ************************************ 00:06:46.536 START TEST accel_xor 00:06:46.536 ************************************ 00:06:46.536 16:16:06 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:06:46.536 16:16:06 -- accel/accel.sh@16 -- # local accel_opc 00:06:46.536 16:16:06 -- accel/accel.sh@17 -- # local accel_module 00:06:46.536 
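The START TEST/END TEST banners and the real/user/sys triplet wrapped around every case come from the run_test helper in common/autotest_common.sh; roughly, as a reconstruction from this log rather than a copy of the SPDK source, it behaves like:

  run_test() {
      local name=$1; shift
      echo "START TEST $name"    # the starred banner rows above
      time "$@"                  # e.g. accel_test -t 1 -w compare -y
      echo "END TEST $name"
  }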
16:16:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:46.536 16:16:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:46.536 16:16:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.536 16:16:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.536 16:16:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.536 16:16:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.536 16:16:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.536 16:16:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.536 16:16:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.536 16:16:06 -- accel/accel.sh@42 -- # jq -r . 00:06:46.536 [2024-11-09 16:16:06.299670] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:46.536 [2024-11-09 16:16:06.299768] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59021 ] 00:06:46.795 [2024-11-09 16:16:06.447145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.053 [2024-11-09 16:16:06.591342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.426 16:16:08 -- accel/accel.sh@18 -- # out=' 00:06:48.426 SPDK Configuration: 00:06:48.426 Core mask: 0x1 00:06:48.426 00:06:48.426 Accel Perf Configuration: 00:06:48.426 Workload Type: xor 00:06:48.426 Source buffers: 2 00:06:48.426 Transfer size: 4096 bytes 00:06:48.426 Vector count 1 00:06:48.426 Module: software 00:06:48.426 Queue depth: 32 00:06:48.426 Allocate depth: 32 00:06:48.426 # threads/core: 1 00:06:48.426 Run time: 1 seconds 00:06:48.426 Verify: Yes 00:06:48.426 00:06:48.426 Running for 1 seconds... 00:06:48.426 00:06:48.426 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.426 ------------------------------------------------------------------------------------ 00:06:48.426 0,0 440800/s 1721 MiB/s 0 0 00:06:48.426 ==================================================================================== 00:06:48.426 Total 440800/s 1721 MiB/s 0 0' 00:06:48.426 16:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.427 16:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.427 16:16:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:48.427 16:16:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:48.427 16:16:08 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.427 16:16:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.427 16:16:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.427 16:16:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.427 16:16:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.427 16:16:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.427 16:16:08 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.427 16:16:08 -- accel/accel.sh@42 -- # jq -r . 00:06:48.685 [2024-11-09 16:16:08.207773] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:06:48.685 [2024-11-09 16:16:08.207873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59049 ] 00:06:48.685 [2024-11-09 16:16:08.345885] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.944 [2024-11-09 16:16:08.487913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.944 16:16:08 -- accel/accel.sh@21 -- # val= 00:06:48.944 16:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.944 16:16:08 -- accel/accel.sh@21 -- # val= 00:06:48.944 16:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.944 16:16:08 -- accel/accel.sh@21 -- # val=0x1 00:06:48.944 16:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.944 16:16:08 -- accel/accel.sh@21 -- # val= 00:06:48.944 16:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.944 16:16:08 -- accel/accel.sh@21 -- # val= 00:06:48.944 16:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.944 16:16:08 -- accel/accel.sh@21 -- # val=xor 00:06:48.944 16:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.944 16:16:08 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.944 16:16:08 -- accel/accel.sh@21 -- # val=2 00:06:48.944 16:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.944 16:16:08 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:48.944 16:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.944 16:16:08 -- accel/accel.sh@21 -- # val= 00:06:48.944 16:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.944 16:16:08 -- accel/accel.sh@21 -- # val=software 00:06:48.944 16:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.944 16:16:08 -- accel/accel.sh@23 -- # accel_module=software 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.944 16:16:08 -- accel/accel.sh@21 -- # val=32 00:06:48.944 16:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.944 16:16:08 -- accel/accel.sh@21 -- # val=32 00:06:48.944 16:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.944 16:16:08 -- accel/accel.sh@21 -- # val=1 00:06:48.944 16:16:08 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.944 16:16:08 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:48.944 16:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.944 16:16:08 -- accel/accel.sh@21 -- # val=Yes 00:06:48.944 16:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.944 16:16:08 -- accel/accel.sh@21 -- # val= 00:06:48.944 16:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:48.944 16:16:08 -- accel/accel.sh@21 -- # val= 00:06:48.944 16:16:08 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # IFS=: 00:06:48.944 16:16:08 -- accel/accel.sh@20 -- # read -r var val 00:06:50.322 16:16:10 -- accel/accel.sh@21 -- # val= 00:06:50.322 16:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.322 16:16:10 -- accel/accel.sh@20 -- # IFS=: 00:06:50.322 16:16:10 -- accel/accel.sh@20 -- # read -r var val 00:06:50.322 16:16:10 -- accel/accel.sh@21 -- # val= 00:06:50.322 16:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.322 16:16:10 -- accel/accel.sh@20 -- # IFS=: 00:06:50.322 16:16:10 -- accel/accel.sh@20 -- # read -r var val 00:06:50.322 16:16:10 -- accel/accel.sh@21 -- # val= 00:06:50.322 16:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.322 16:16:10 -- accel/accel.sh@20 -- # IFS=: 00:06:50.322 16:16:10 -- accel/accel.sh@20 -- # read -r var val 00:06:50.322 16:16:10 -- accel/accel.sh@21 -- # val= 00:06:50.322 16:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.322 16:16:10 -- accel/accel.sh@20 -- # IFS=: 00:06:50.322 16:16:10 -- accel/accel.sh@20 -- # read -r var val 00:06:50.322 16:16:10 -- accel/accel.sh@21 -- # val= 00:06:50.322 16:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.322 16:16:10 -- accel/accel.sh@20 -- # IFS=: 00:06:50.322 16:16:10 -- accel/accel.sh@20 -- # read -r var val 00:06:50.322 16:16:10 -- accel/accel.sh@21 -- # val= 00:06:50.322 16:16:10 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.322 16:16:10 -- accel/accel.sh@20 -- # IFS=: 00:06:50.322 16:16:10 -- accel/accel.sh@20 -- # read -r var val 00:06:50.322 16:16:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.322 16:16:10 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:50.322 16:16:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.322 00:06:50.322 real 0m3.806s 00:06:50.322 user 0m3.380s 00:06:50.322 sys 0m0.223s 00:06:50.322 16:16:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:50.322 ************************************ 00:06:50.322 END TEST accel_xor 00:06:50.322 ************************************ 00:06:50.322 16:16:10 -- common/autotest_common.sh@10 -- # set +x 00:06:50.656 16:16:10 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:50.656 16:16:10 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:06:50.656 16:16:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:50.656 16:16:10 -- common/autotest_common.sh@10 -- # set +x 00:06:50.656 ************************************ 00:06:50.656 START TEST accel_xor 00:06:50.656 ************************************ 00:06:50.656 
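For xor, -x sets the number of source buffers: the previous run used the default of 2 and this one passes -x 3, which the configuration dump echoes as Source buffers: 3. A standalone sketch, with the usual caveat about the omitted config fd:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3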
16:16:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:06:50.656 16:16:10 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.656 16:16:10 -- accel/accel.sh@17 -- # local accel_module 00:06:50.656 16:16:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:50.656 16:16:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:50.656 16:16:10 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.656 16:16:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.656 16:16:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.656 16:16:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.656 16:16:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.656 16:16:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.656 16:16:10 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.656 16:16:10 -- accel/accel.sh@42 -- # jq -r . 00:06:50.656 [2024-11-09 16:16:10.165656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:50.656 [2024-11-09 16:16:10.165758] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59085 ] 00:06:50.656 [2024-11-09 16:16:10.313601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.918 [2024-11-09 16:16:10.496431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.818 16:16:12 -- accel/accel.sh@18 -- # out=' 00:06:52.818 SPDK Configuration: 00:06:52.818 Core mask: 0x1 00:06:52.818 00:06:52.818 Accel Perf Configuration: 00:06:52.818 Workload Type: xor 00:06:52.818 Source buffers: 3 00:06:52.818 Transfer size: 4096 bytes 00:06:52.818 Vector count 1 00:06:52.818 Module: software 00:06:52.818 Queue depth: 32 00:06:52.818 Allocate depth: 32 00:06:52.818 # threads/core: 1 00:06:52.818 Run time: 1 seconds 00:06:52.818 Verify: Yes 00:06:52.818 00:06:52.818 Running for 1 seconds... 00:06:52.818 00:06:52.818 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.818 ------------------------------------------------------------------------------------ 00:06:52.818 0,0 329792/s 1288 MiB/s 0 0 00:06:52.818 ==================================================================================== 00:06:52.818 Total 329792/s 1288 MiB/s 0 0' 00:06:52.818 16:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:52.818 16:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:52.818 16:16:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:52.818 16:16:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:52.818 16:16:12 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.818 16:16:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.818 16:16:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.818 16:16:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.818 16:16:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.818 16:16:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.818 16:16:12 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.818 16:16:12 -- accel/accel.sh@42 -- # jq -r . 00:06:52.818 [2024-11-09 16:16:12.152384] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
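The first pass above pushes the 3-buffer xor workload through the software module at roughly 1288 MiB/s; note that accel_test launches accel_perf twice per test (hence the second DPDK initialization starting here), the repeat run taking its accel config as JSON over /dev/fd/62. A hand-run equivalent of the traced command, assuming a built SPDK tree at the same path and that the config fd can simply be omitted for a default software run:

    # 1-second software xor benchmark: 3 source buffers, 4096-byte transfers, verification on
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3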
00:06:52.818 [2024-11-09 16:16:12.152884] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59111 ] 00:06:52.818 [2024-11-09 16:16:12.301636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.818 [2024-11-09 16:16:12.481947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.077 16:16:12 -- accel/accel.sh@21 -- # val= 00:06:53.077 16:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.077 16:16:12 -- accel/accel.sh@21 -- # val= 00:06:53.077 16:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.077 16:16:12 -- accel/accel.sh@21 -- # val=0x1 00:06:53.077 16:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.077 16:16:12 -- accel/accel.sh@21 -- # val= 00:06:53.077 16:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.077 16:16:12 -- accel/accel.sh@21 -- # val= 00:06:53.077 16:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.077 16:16:12 -- accel/accel.sh@21 -- # val=xor 00:06:53.077 16:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.077 16:16:12 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.077 16:16:12 -- accel/accel.sh@21 -- # val=3 00:06:53.077 16:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.077 16:16:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:53.077 16:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.077 16:16:12 -- accel/accel.sh@21 -- # val= 00:06:53.077 16:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.077 16:16:12 -- accel/accel.sh@21 -- # val=software 00:06:53.077 16:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.077 16:16:12 -- accel/accel.sh@23 -- # accel_module=software 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.077 16:16:12 -- accel/accel.sh@21 -- # val=32 00:06:53.077 16:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.077 16:16:12 -- accel/accel.sh@21 -- # val=32 00:06:53.077 16:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.077 16:16:12 -- accel/accel.sh@21 -- # val=1 00:06:53.077 16:16:12 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.077 16:16:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:53.077 16:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.077 16:16:12 -- accel/accel.sh@21 -- # val=Yes 00:06:53.077 16:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.077 16:16:12 -- accel/accel.sh@21 -- # val= 00:06:53.077 16:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:53.077 16:16:12 -- accel/accel.sh@21 -- # val= 00:06:53.077 16:16:12 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # IFS=: 00:06:53.077 16:16:12 -- accel/accel.sh@20 -- # read -r var val 00:06:54.456 16:16:14 -- accel/accel.sh@21 -- # val= 00:06:54.456 16:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.456 16:16:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.456 16:16:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.456 16:16:14 -- accel/accel.sh@21 -- # val= 00:06:54.456 16:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.456 16:16:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.456 16:16:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.456 16:16:14 -- accel/accel.sh@21 -- # val= 00:06:54.456 16:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.456 16:16:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.456 16:16:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.456 16:16:14 -- accel/accel.sh@21 -- # val= 00:06:54.456 16:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.456 16:16:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.456 16:16:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.456 16:16:14 -- accel/accel.sh@21 -- # val= 00:06:54.456 16:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.456 16:16:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.456 16:16:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.456 16:16:14 -- accel/accel.sh@21 -- # val= 00:06:54.456 16:16:14 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.456 16:16:14 -- accel/accel.sh@20 -- # IFS=: 00:06:54.456 16:16:14 -- accel/accel.sh@20 -- # read -r var val 00:06:54.456 16:16:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.456 16:16:14 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:54.456 16:16:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.456 00:06:54.456 real 0m4.095s 00:06:54.456 user 0m3.644s 00:06:54.456 sys 0m0.236s 00:06:54.456 16:16:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:54.716 ************************************ 00:06:54.716 END TEST accel_xor 00:06:54.716 ************************************ 00:06:54.716 16:16:14 -- common/autotest_common.sh@10 -- # set +x 00:06:54.716 16:16:14 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:54.716 16:16:14 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:54.717 16:16:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:54.717 16:16:14 -- common/autotest_common.sh@10 -- # set +x 00:06:54.717 ************************************ 00:06:54.717 START TEST accel_dif_verify 00:06:54.717 ************************************ 
00:06:54.717 16:16:14 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:06:54.717 16:16:14 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.717 16:16:14 -- accel/accel.sh@17 -- # local accel_module 00:06:54.717 16:16:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:54.717 16:16:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:54.717 16:16:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.717 16:16:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.717 16:16:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.717 16:16:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.717 16:16:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.717 16:16:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.717 16:16:14 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.717 16:16:14 -- accel/accel.sh@42 -- # jq -r . 00:06:54.717 [2024-11-09 16:16:14.325202] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:54.717 [2024-11-09 16:16:14.325326] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59152 ] 00:06:54.717 [2024-11-09 16:16:14.474550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.978 [2024-11-09 16:16:14.658772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.889 16:16:16 -- accel/accel.sh@18 -- # out=' 00:06:56.890 SPDK Configuration: 00:06:56.890 Core mask: 0x1 00:06:56.890 00:06:56.890 Accel Perf Configuration: 00:06:56.890 Workload Type: dif_verify 00:06:56.890 Vector size: 4096 bytes 00:06:56.890 Transfer size: 4096 bytes 00:06:56.890 Block size: 512 bytes 00:06:56.890 Metadata size: 8 bytes 00:06:56.890 Vector count 1 00:06:56.890 Module: software 00:06:56.890 Queue depth: 32 00:06:56.890 Allocate depth: 32 00:06:56.890 # threads/core: 1 00:06:56.890 Run time: 1 seconds 00:06:56.890 Verify: No 00:06:56.890 00:06:56.890 Running for 1 seconds... 00:06:56.890 00:06:56.890 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:56.890 ------------------------------------------------------------------------------------ 00:06:56.890 0,0 97760/s 381 MiB/s 0 0 00:06:56.890 ==================================================================================== 00:06:56.890 Total 97760/s 381 MiB/s 0 0' 00:06:56.890 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:56.890 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:56.890 16:16:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:56.890 16:16:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:56.890 16:16:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.890 16:16:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.890 16:16:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.890 16:16:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.890 16:16:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.890 16:16:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.890 16:16:16 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.890 16:16:16 -- accel/accel.sh@42 -- # jq -r . 00:06:56.890 [2024-11-09 16:16:16.444814] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
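The dif_verify configuration block above spells out the DIF geometry: 4096-byte vectors split into 512-byte blocks, each carrying 8 bytes of metadata. Under the same assumptions as the xor sketch, a hand-run equivalent is:

    # 1-second software DIF-verify benchmark; the 512-byte block / 8-byte metadata sizes are the defaults printed above
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify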
00:06:56.890 [2024-11-09 16:16:16.444922] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59182 ] 00:06:56.890 [2024-11-09 16:16:16.592039] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.151 [2024-11-09 16:16:16.774823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.151 16:16:16 -- accel/accel.sh@21 -- # val= 00:06:57.151 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.151 16:16:16 -- accel/accel.sh@21 -- # val= 00:06:57.151 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.151 16:16:16 -- accel/accel.sh@21 -- # val=0x1 00:06:57.151 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.151 16:16:16 -- accel/accel.sh@21 -- # val= 00:06:57.151 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.151 16:16:16 -- accel/accel.sh@21 -- # val= 00:06:57.151 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.151 16:16:16 -- accel/accel.sh@21 -- # val=dif_verify 00:06:57.151 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.151 16:16:16 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.151 16:16:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.151 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.151 16:16:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.151 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.151 16:16:16 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:57.151 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.151 16:16:16 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:57.151 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.151 16:16:16 -- accel/accel.sh@21 -- # val= 00:06:57.151 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.151 16:16:16 -- accel/accel.sh@21 -- # val=software 00:06:57.151 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.151 16:16:16 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.151 16:16:16 -- accel/accel.sh@21 
-- # val=32 00:06:57.151 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.151 16:16:16 -- accel/accel.sh@21 -- # val=32 00:06:57.151 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.151 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.151 16:16:16 -- accel/accel.sh@21 -- # val=1 00:06:57.152 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.152 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.152 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.152 16:16:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.152 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.152 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.152 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.152 16:16:16 -- accel/accel.sh@21 -- # val=No 00:06:57.152 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.152 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.152 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.152 16:16:16 -- accel/accel.sh@21 -- # val= 00:06:57.152 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.152 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:57.413 16:16:16 -- accel/accel.sh@21 -- # val= 00:06:57.413 16:16:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.413 16:16:16 -- accel/accel.sh@20 -- # IFS=: 00:06:57.413 16:16:16 -- accel/accel.sh@20 -- # read -r var val 00:06:58.798 16:16:18 -- accel/accel.sh@21 -- # val= 00:06:58.798 16:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.798 16:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.798 16:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.798 16:16:18 -- accel/accel.sh@21 -- # val= 00:06:58.798 16:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.798 16:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.798 16:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.798 16:16:18 -- accel/accel.sh@21 -- # val= 00:06:58.798 16:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.798 16:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.798 16:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.798 16:16:18 -- accel/accel.sh@21 -- # val= 00:06:58.798 16:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.798 16:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.798 16:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.798 16:16:18 -- accel/accel.sh@21 -- # val= 00:06:58.798 16:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.798 16:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.798 16:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.798 16:16:18 -- accel/accel.sh@21 -- # val= 00:06:58.798 16:16:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.798 16:16:18 -- accel/accel.sh@20 -- # IFS=: 00:06:58.798 16:16:18 -- accel/accel.sh@20 -- # read -r var val 00:06:58.798 16:16:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:58.798 16:16:18 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:58.798 16:16:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.798 00:06:58.798 real 0m4.239s 00:06:58.798 user 0m3.784s 00:06:58.798 sys 0m0.240s 00:06:58.798 16:16:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:58.798 ************************************ 00:06:58.798 END TEST accel_dif_verify 00:06:58.798 ************************************ 00:06:58.798 
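Every test in this log is driven by the run_test helper from common/autotest_common.sh, which prints the START/END banners and the real/user/sys timings seen here. A rough sketch of its shape (simplified; the real helper also manages xtrace state and the return code):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # e.g. accel_test -t 1 -w dif_verify
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }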
16:16:18 -- common/autotest_common.sh@10 -- # set +x 00:06:59.059 16:16:18 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:59.059 16:16:18 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:59.059 16:16:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:59.059 16:16:18 -- common/autotest_common.sh@10 -- # set +x 00:06:59.059 ************************************ 00:06:59.059 START TEST accel_dif_generate 00:06:59.059 ************************************ 00:06:59.059 16:16:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:06:59.059 16:16:18 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.059 16:16:18 -- accel/accel.sh@17 -- # local accel_module 00:06:59.059 16:16:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:59.059 16:16:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:59.059 16:16:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.059 16:16:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.059 16:16:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.059 16:16:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.059 16:16:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.059 16:16:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.059 16:16:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.059 16:16:18 -- accel/accel.sh@42 -- # jq -r . 00:06:59.059 [2024-11-09 16:16:18.629869] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:06:59.059 [2024-11-09 16:16:18.629977] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59230 ] 00:06:59.059 [2024-11-09 16:16:18.779396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.325 [2024-11-09 16:16:19.000017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.259 16:16:20 -- accel/accel.sh@18 -- # out=' 00:07:01.259 SPDK Configuration: 00:07:01.259 Core mask: 0x1 00:07:01.259 00:07:01.259 Accel Perf Configuration: 00:07:01.259 Workload Type: dif_generate 00:07:01.259 Vector size: 4096 bytes 00:07:01.259 Transfer size: 4096 bytes 00:07:01.259 Block size: 512 bytes 00:07:01.259 Metadata size: 8 bytes 00:07:01.259 Vector count 1 00:07:01.259 Module: software 00:07:01.259 Queue depth: 32 00:07:01.259 Allocate depth: 32 00:07:01.259 # threads/core: 1 00:07:01.259 Run time: 1 seconds 00:07:01.259 Verify: No 00:07:01.259 00:07:01.259 Running for 1 seconds... 
00:07:01.259 00:07:01.259 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:01.259 ------------------------------------------------------------------------------------ 00:07:01.259 0,0 114144/s 445 MiB/s 0 0 00:07:01.259 ==================================================================================== 00:07:01.259 Total 114144/s 445 MiB/s 0 0' 00:07:01.259 16:16:20 -- accel/accel.sh@20 -- # IFS=: 00:07:01.259 16:16:20 -- accel/accel.sh@20 -- # read -r var val 00:07:01.259 16:16:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:01.259 16:16:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:01.259 16:16:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.259 16:16:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.259 16:16:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.259 16:16:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.259 16:16:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.259 16:16:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.259 16:16:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.259 16:16:20 -- accel/accel.sh@42 -- # jq -r . 00:07:01.259 [2024-11-09 16:16:20.822917] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:01.259 [2024-11-09 16:16:20.823140] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59256 ] 00:07:01.259 [2024-11-09 16:16:20.970948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.520 [2024-11-09 16:16:21.143654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.520 16:16:21 -- accel/accel.sh@21 -- # val= 00:07:01.520 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.520 16:16:21 -- accel/accel.sh@21 -- # val= 00:07:01.520 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.520 16:16:21 -- accel/accel.sh@21 -- # val=0x1 00:07:01.520 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.520 16:16:21 -- accel/accel.sh@21 -- # val= 00:07:01.520 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.520 16:16:21 -- accel/accel.sh@21 -- # val= 00:07:01.520 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.520 16:16:21 -- accel/accel.sh@21 -- # val=dif_generate 00:07:01.520 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.520 16:16:21 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.520 16:16:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.520 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # read -r var val
00:07:01.520 16:16:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:01.520 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.520 16:16:21 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:01.520 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.520 16:16:21 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:01.520 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.520 16:16:21 -- accel/accel.sh@21 -- # val= 00:07:01.520 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.520 16:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.781 16:16:21 -- accel/accel.sh@21 -- # val=software 00:07:01.781 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.781 16:16:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:01.781 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.781 16:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.781 16:16:21 -- accel/accel.sh@21 -- # val=32 00:07:01.781 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.781 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.781 16:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.781 16:16:21 -- accel/accel.sh@21 -- # val=32 00:07:01.781 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.781 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.781 16:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.781 16:16:21 -- accel/accel.sh@21 -- # val=1 00:07:01.781 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.781 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.781 16:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.781 16:16:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:01.781 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.781 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.781 16:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.781 16:16:21 -- accel/accel.sh@21 -- # val=No 00:07:01.781 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.781 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.781 16:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.781 16:16:21 -- accel/accel.sh@21 -- # val= 00:07:01.781 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.781 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.781 16:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:01.781 16:16:21 -- accel/accel.sh@21 -- # val= 00:07:01.781 16:16:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.781 16:16:21 -- accel/accel.sh@20 -- # IFS=: 00:07:01.781 16:16:21 -- accel/accel.sh@20 -- # read -r var val 00:07:03.167 16:16:22 -- accel/accel.sh@21 -- # val= 00:07:03.167 16:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.167 16:16:22 -- accel/accel.sh@20 -- # IFS=: 00:07:03.167 16:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:03.167 16:16:22 -- accel/accel.sh@21 -- # val= 00:07:03.167 16:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.167 16:16:22 -- accel/accel.sh@20 -- # IFS=: 00:07:03.167 16:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:03.167 16:16:22 -- accel/accel.sh@21 -- # val= 00:07:03.167 16:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.167 16:16:22 -- 
accel/accel.sh@20 -- # IFS=: 00:07:03.167 16:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:03.167 16:16:22 -- accel/accel.sh@21 -- # val= 00:07:03.167 16:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.167 16:16:22 -- accel/accel.sh@20 -- # IFS=: 00:07:03.167 16:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:03.167 16:16:22 -- accel/accel.sh@21 -- # val= 00:07:03.167 16:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.167 16:16:22 -- accel/accel.sh@20 -- # IFS=: 00:07:03.167 16:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:03.167 16:16:22 -- accel/accel.sh@21 -- # val= 00:07:03.167 16:16:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.167 16:16:22 -- accel/accel.sh@20 -- # IFS=: 00:07:03.167 16:16:22 -- accel/accel.sh@20 -- # read -r var val 00:07:03.167 ************************************ 00:07:03.167 END TEST accel_dif_generate 00:07:03.167 ************************************ 00:07:03.167 16:16:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.167 16:16:22 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:03.167 16:16:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.167 00:07:03.167 real 0m4.287s 00:07:03.167 user 0m3.810s 00:07:03.167 sys 0m0.264s 00:07:03.167 16:16:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:03.167 16:16:22 -- common/autotest_common.sh@10 -- # set +x 00:07:03.167 16:16:22 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:03.167 16:16:22 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:03.167 16:16:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:03.167 16:16:22 -- common/autotest_common.sh@10 -- # set +x 00:07:03.425 ************************************ 00:07:03.425 START TEST accel_dif_generate_copy 00:07:03.425 ************************************ 00:07:03.425 16:16:22 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:07:03.425 16:16:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.425 16:16:22 -- accel/accel.sh@17 -- # local accel_module 00:07:03.425 16:16:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:03.425 16:16:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:03.425 16:16:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.425 16:16:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.425 16:16:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.425 16:16:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.425 16:16:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.425 16:16:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.425 16:16:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.425 16:16:22 -- accel/accel.sh@42 -- # jq -r . 00:07:03.425 [2024-11-09 16:16:22.973219] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
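accel_dif_generate measures DIF insertion alone and lands around 445 MiB/s in software, somewhat faster than the 381 MiB/s verify path, plausibly because nothing has to be checked against expected tags. Hand-run equivalent of the traced command, same assumptions as above:

    # 1-second software DIF-generate benchmark (4096-byte vectors, 512-byte blocks, 8-byte metadata)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate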
00:07:03.425 [2024-11-09 16:16:22.973332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59297 ] 00:07:03.425 [2024-11-09 16:16:23.124607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.683 [2024-11-09 16:16:23.292397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.585 16:16:24 -- accel/accel.sh@18 -- # out=' 00:07:05.585 SPDK Configuration: 00:07:05.585 Core mask: 0x1 00:07:05.585 00:07:05.585 Accel Perf Configuration: 00:07:05.585 Workload Type: dif_generate_copy 00:07:05.585 Vector size: 4096 bytes 00:07:05.585 Transfer size: 4096 bytes 00:07:05.585 Vector count 1 00:07:05.585 Module: software 00:07:05.585 Queue depth: 32 00:07:05.585 Allocate depth: 32 00:07:05.585 # threads/core: 1 00:07:05.585 Run time: 1 seconds 00:07:05.585 Verify: No 00:07:05.585 00:07:05.585 Running for 1 seconds... 00:07:05.585 00:07:05.585 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:05.585 ------------------------------------------------------------------------------------ 00:07:05.585 0,0 90720/s 354 MiB/s 0 0 00:07:05.585 ==================================================================================== 00:07:05.585 Total 90720/s 354 MiB/s 0 0' 00:07:05.585 16:16:24 -- accel/accel.sh@20 -- # IFS=: 00:07:05.585 16:16:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:05.585 16:16:24 -- accel/accel.sh@20 -- # read -r var val 00:07:05.585 16:16:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:05.585 16:16:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.585 16:16:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.585 16:16:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.585 16:16:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.585 16:16:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.585 16:16:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.585 16:16:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.585 16:16:24 -- accel/accel.sh@42 -- # jq -r . 00:07:05.585 [2024-11-09 16:16:25.014704] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
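dif_generate_copy fuses DIF generation with a copy to a destination buffer, which shows up as a modest throughput drop versus plain dif_generate (354 vs 445 MiB/s in these runs). Hand-run equivalent:

    # 1-second software dif_generate_copy benchmark, 4096-byte transfers
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy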
00:07:05.585 [2024-11-09 16:16:25.014966] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59323 ] 00:07:05.585 [2024-11-09 16:16:25.164034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.585 [2024-11-09 16:16:25.299724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.845 16:16:25 -- accel/accel.sh@21 -- # val= 00:07:05.845 16:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.845 16:16:25 -- accel/accel.sh@21 -- # val= 00:07:05.845 16:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.845 16:16:25 -- accel/accel.sh@21 -- # val=0x1 00:07:05.845 16:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.845 16:16:25 -- accel/accel.sh@21 -- # val= 00:07:05.845 16:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.845 16:16:25 -- accel/accel.sh@21 -- # val= 00:07:05.845 16:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.845 16:16:25 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:05.845 16:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.845 16:16:25 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.845 16:16:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.845 16:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.845 16:16:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:05.845 16:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.845 16:16:25 -- accel/accel.sh@21 -- # val= 00:07:05.845 16:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.845 16:16:25 -- accel/accel.sh@21 -- # val=software 00:07:05.845 16:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.845 16:16:25 -- accel/accel.sh@23 -- # accel_module=software 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.845 16:16:25 -- accel/accel.sh@21 -- # val=32 00:07:05.845 16:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.845 16:16:25 -- accel/accel.sh@21 -- # val=32 00:07:05.845 16:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.845 16:16:25 -- accel/accel.sh@21 
-- # val=1 00:07:05.845 16:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.845 16:16:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:05.845 16:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.845 16:16:25 -- accel/accel.sh@21 -- # val=No 00:07:05.845 16:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.845 16:16:25 -- accel/accel.sh@21 -- # val= 00:07:05.845 16:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:05.845 16:16:25 -- accel/accel.sh@21 -- # val= 00:07:05.845 16:16:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # IFS=: 00:07:05.845 16:16:25 -- accel/accel.sh@20 -- # read -r var val 00:07:07.251 16:16:26 -- accel/accel.sh@21 -- # val= 00:07:07.251 16:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.251 16:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.251 16:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.251 16:16:26 -- accel/accel.sh@21 -- # val= 00:07:07.251 16:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.251 16:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.251 16:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.251 16:16:26 -- accel/accel.sh@21 -- # val= 00:07:07.251 16:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.251 16:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.251 16:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.251 16:16:26 -- accel/accel.sh@21 -- # val= 00:07:07.251 16:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.251 16:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.251 16:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.251 16:16:26 -- accel/accel.sh@21 -- # val= 00:07:07.251 16:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.251 16:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.251 16:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.251 16:16:26 -- accel/accel.sh@21 -- # val= 00:07:07.251 16:16:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.251 16:16:26 -- accel/accel.sh@20 -- # IFS=: 00:07:07.252 16:16:26 -- accel/accel.sh@20 -- # read -r var val 00:07:07.252 16:16:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.252 16:16:26 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:07.252 16:16:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.252 00:07:07.252 real 0m3.949s 00:07:07.252 user 0m3.506s 00:07:07.252 sys 0m0.238s 00:07:07.252 ************************************ 00:07:07.252 END TEST accel_dif_generate_copy 00:07:07.252 ************************************ 00:07:07.252 16:16:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:07.252 16:16:26 -- common/autotest_common.sh@10 -- # set +x 00:07:07.252 16:16:26 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:07.252 16:16:26 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.252 16:16:26 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:07.252 16:16:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:07.252 16:16:26 -- 
common/autotest_common.sh@10 -- # set +x 00:07:07.252 ************************************ 00:07:07.252 START TEST accel_comp 00:07:07.252 ************************************ 00:07:07.252 16:16:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.252 16:16:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:07.252 16:16:26 -- accel/accel.sh@17 -- # local accel_module 00:07:07.252 16:16:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.252 16:16:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:07.252 16:16:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.252 16:16:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.252 16:16:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.252 16:16:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.252 16:16:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.252 16:16:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.252 16:16:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.252 16:16:26 -- accel/accel.sh@42 -- # jq -r . 00:07:07.252 [2024-11-09 16:16:26.964187] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:07.252 [2024-11-09 16:16:26.964400] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59364 ] 00:07:07.510 [2024-11-09 16:16:27.110314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.510 [2024-11-09 16:16:27.247564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.411 16:16:28 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:09.411 00:07:09.411 SPDK Configuration: 00:07:09.411 Core mask: 0x1 00:07:09.411 00:07:09.411 Accel Perf Configuration: 00:07:09.411 Workload Type: compress 00:07:09.411 Transfer size: 4096 bytes 00:07:09.411 Vector count 1 00:07:09.411 Module: software 00:07:09.411 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.411 Queue depth: 32 00:07:09.411 Allocate depth: 32 00:07:09.411 # threads/core: 1 00:07:09.411 Run time: 1 seconds 00:07:09.411 Verify: No 00:07:09.411 00:07:09.411 Running for 1 seconds... 
00:07:09.411 00:07:09.411 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:09.411 ------------------------------------------------------------------------------------ 00:07:09.411 0,0 63936/s 249 MiB/s 0 0 00:07:09.411 ==================================================================================== 00:07:09.411 Total 63936/s 249 MiB/s 0 0' 00:07:09.411 16:16:28 -- accel/accel.sh@20 -- # IFS=: 00:07:09.411 16:16:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.411 16:16:28 -- accel/accel.sh@20 -- # read -r var val 00:07:09.411 16:16:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.412 16:16:28 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.412 16:16:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.412 16:16:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.412 16:16:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.412 16:16:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.412 16:16:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.412 16:16:28 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.412 16:16:28 -- accel/accel.sh@42 -- # jq -r . 00:07:09.412 [2024-11-09 16:16:28.865887] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:09.412 [2024-11-09 16:16:28.865964] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59388 ] 00:07:09.412 [2024-11-09 16:16:29.005389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.412 [2024-11-09 16:16:29.141609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.670 16:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.670 16:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.671 16:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.671 16:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.671 16:16:29 -- accel/accel.sh@21 -- # val=0x1 00:07:09.671 16:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.671 16:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.671 16:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.671 16:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.671 16:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.671 16:16:29 -- accel/accel.sh@21 -- # val=compress 00:07:09.671 16:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.671 16:16:29 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # IFS=: 
00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.671 16:16:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.671 16:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.671 16:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.671 16:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.671 16:16:29 -- accel/accel.sh@21 -- # val=software 00:07:09.671 16:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.671 16:16:29 -- accel/accel.sh@23 -- # accel_module=software 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.671 16:16:29 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:09.671 16:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.671 16:16:29 -- accel/accel.sh@21 -- # val=32 00:07:09.671 16:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.671 16:16:29 -- accel/accel.sh@21 -- # val=32 00:07:09.671 16:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.671 16:16:29 -- accel/accel.sh@21 -- # val=1 00:07:09.671 16:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.671 16:16:29 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.671 16:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.671 16:16:29 -- accel/accel.sh@21 -- # val=No 00:07:09.671 16:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.671 16:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.671 16:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:09.671 16:16:29 -- accel/accel.sh@21 -- # val= 00:07:09.671 16:16:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # IFS=: 00:07:09.671 16:16:29 -- accel/accel.sh@20 -- # read -r var val 00:07:11.048 16:16:30 -- accel/accel.sh@21 -- # val= 00:07:11.048 16:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.048 16:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.048 16:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.048 16:16:30 -- accel/accel.sh@21 -- # val= 00:07:11.048 16:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.048 16:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.048 16:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.048 16:16:30 -- accel/accel.sh@21 -- # val= 00:07:11.048 16:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.048 16:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.048 16:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.048 16:16:30 -- accel/accel.sh@21 -- # val= 
00:07:11.048 16:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.048 16:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.048 16:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.048 16:16:30 -- accel/accel.sh@21 -- # val= 00:07:11.048 16:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.048 16:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.048 16:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.048 16:16:30 -- accel/accel.sh@21 -- # val= 00:07:11.048 16:16:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.048 16:16:30 -- accel/accel.sh@20 -- # IFS=: 00:07:11.048 16:16:30 -- accel/accel.sh@20 -- # read -r var val 00:07:11.048 ************************************ 00:07:11.048 END TEST accel_comp 00:07:11.048 ************************************ 00:07:11.048 16:16:30 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.048 16:16:30 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:11.048 16:16:30 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.048 00:07:11.048 real 0m3.786s 00:07:11.048 user 0m3.357s 00:07:11.048 sys 0m0.230s 00:07:11.048 16:16:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:11.048 16:16:30 -- common/autotest_common.sh@10 -- # set +x 00:07:11.048 16:16:30 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:11.048 16:16:30 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:11.048 16:16:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:11.048 16:16:30 -- common/autotest_common.sh@10 -- # set +x 00:07:11.048 ************************************ 00:07:11.048 START TEST accel_decomp 00:07:11.048 ************************************ 00:07:11.048 16:16:30 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:11.048 16:16:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.048 16:16:30 -- accel/accel.sh@17 -- # local accel_module 00:07:11.048 16:16:30 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:11.048 16:16:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:11.048 16:16:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.048 16:16:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.048 16:16:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.048 16:16:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.048 16:16:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.048 16:16:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.048 16:16:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.048 16:16:30 -- accel/accel.sh@42 -- # jq -r . 00:07:11.048 [2024-11-09 16:16:30.794614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:11.048 [2024-11-09 16:16:30.794810] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59423 ] 00:07:11.306 [2024-11-09 16:16:30.944044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.564 [2024-11-09 16:16:31.086895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.967 16:16:32 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:12.967 00:07:12.967 SPDK Configuration: 00:07:12.967 Core mask: 0x1 00:07:12.967 00:07:12.967 Accel Perf Configuration: 00:07:12.967 Workload Type: decompress 00:07:12.967 Transfer size: 4096 bytes 00:07:12.967 Vector count 1 00:07:12.967 Module: software 00:07:12.967 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:12.967 Queue depth: 32 00:07:12.967 Allocate depth: 32 00:07:12.967 # threads/core: 1 00:07:12.967 Run time: 1 seconds 00:07:12.967 Verify: Yes 00:07:12.967 00:07:12.967 Running for 1 seconds... 00:07:12.967 00:07:12.967 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.967 ------------------------------------------------------------------------------------ 00:07:12.967 0,0 81440/s 318 MiB/s 0 0 00:07:12.967 ==================================================================================== 00:07:12.967 Total 81440/s 318 MiB/s 0 0' 00:07:12.967 16:16:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:12.967 16:16:32 -- accel/accel.sh@20 -- # IFS=: 00:07:12.967 16:16:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:12.967 16:16:32 -- accel/accel.sh@20 -- # read -r var val 00:07:12.967 16:16:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.967 16:16:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.967 16:16:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.967 16:16:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.967 16:16:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.967 16:16:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.967 16:16:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.967 16:16:32 -- accel/accel.sh@42 -- # jq -r . 00:07:12.967 [2024-11-09 16:16:32.696218] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
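Unlike the fixed-pattern workloads, the compress and decompress passes read a real corpus supplied with -l (the bib file under test/accel), and decompress also verifies the round trip with -y, which is why its configuration block says Verify: Yes. Hand-run equivalents of the two traced commands:

    # 1-second software compress benchmark over the bib test file
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib
    # same corpus through decompress, verifying the output
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y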
00:07:12.967 [2024-11-09 16:16:32.696332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59449 ] 00:07:13.225 [2024-11-09 16:16:32.839604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.226 [2024-11-09 16:16:32.980248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.484 16:16:33 -- accel/accel.sh@21 -- # val= 00:07:13.484 16:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.484 16:16:33 -- accel/accel.sh@21 -- # val= 00:07:13.484 16:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.484 16:16:33 -- accel/accel.sh@21 -- # val= 00:07:13.484 16:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.484 16:16:33 -- accel/accel.sh@21 -- # val=0x1 00:07:13.484 16:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.484 16:16:33 -- accel/accel.sh@21 -- # val= 00:07:13.484 16:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.484 16:16:33 -- accel/accel.sh@21 -- # val= 00:07:13.484 16:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.484 16:16:33 -- accel/accel.sh@21 -- # val=decompress 00:07:13.484 16:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.484 16:16:33 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.484 16:16:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.484 16:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.484 16:16:33 -- accel/accel.sh@21 -- # val= 00:07:13.484 16:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.484 16:16:33 -- accel/accel.sh@21 -- # val=software 00:07:13.484 16:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.484 16:16:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.484 16:16:33 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:13.484 16:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.484 16:16:33 -- accel/accel.sh@21 -- # val=32 00:07:13.484 16:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.484 16:16:33 -- 
accel/accel.sh@21 -- # val=32 00:07:13.484 16:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.484 16:16:33 -- accel/accel.sh@21 -- # val=1 00:07:13.484 16:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.484 16:16:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.484 16:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.484 16:16:33 -- accel/accel.sh@21 -- # val=Yes 00:07:13.484 16:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.484 16:16:33 -- accel/accel.sh@21 -- # val= 00:07:13.484 16:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:13.484 16:16:33 -- accel/accel.sh@21 -- # val= 00:07:13.484 16:16:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # IFS=: 00:07:13.484 16:16:33 -- accel/accel.sh@20 -- # read -r var val 00:07:14.857 16:16:34 -- accel/accel.sh@21 -- # val= 00:07:14.857 16:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.857 16:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:14.857 16:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:14.857 16:16:34 -- accel/accel.sh@21 -- # val= 00:07:14.857 16:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.857 16:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:14.857 16:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:14.857 16:16:34 -- accel/accel.sh@21 -- # val= 00:07:14.857 16:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.857 16:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:14.857 16:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:14.857 16:16:34 -- accel/accel.sh@21 -- # val= 00:07:14.857 16:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.857 16:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:14.857 16:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:14.857 16:16:34 -- accel/accel.sh@21 -- # val= 00:07:14.857 16:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.857 16:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:14.857 16:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:14.857 16:16:34 -- accel/accel.sh@21 -- # val= 00:07:14.857 16:16:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.857 16:16:34 -- accel/accel.sh@20 -- # IFS=: 00:07:14.857 16:16:34 -- accel/accel.sh@20 -- # read -r var val 00:07:14.857 ************************************ 00:07:14.857 END TEST accel_decomp 00:07:14.857 ************************************ 00:07:14.857 16:16:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.857 16:16:34 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:14.857 16:16:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.857 00:07:14.858 real 0m3.822s 00:07:14.858 user 0m3.393s 00:07:14.858 sys 0m0.223s 00:07:14.858 16:16:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:14.858 16:16:34 -- common/autotest_common.sh@10 -- # set +x 00:07:14.858 16:16:34 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
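Relative to the plain accel_decomp run above, the only new flag is -o 0. Without -o the configuration dumps report the 4096-byte default transfer size; with -o 0 the dump below reports 111250 bytes, so the size appears to be taken from the compressed chunks in the bib input file rather than fixed. A hypothetical direct invocation, under the same assumptions as the earlier sketch:

$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0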
00:07:14.858 16:16:34 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:14.858 16:16:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:14.858 16:16:34 -- common/autotest_common.sh@10 -- # set +x 00:07:14.858 ************************************ 00:07:14.858 START TEST accel_decmop_full 00:07:14.858 ************************************ 00:07:14.858 16:16:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:14.858 16:16:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.858 16:16:34 -- accel/accel.sh@17 -- # local accel_module 00:07:14.858 16:16:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:14.858 16:16:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:14.858 16:16:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.858 16:16:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.858 16:16:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.858 16:16:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.858 16:16:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.858 16:16:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.858 16:16:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.858 16:16:34 -- accel/accel.sh@42 -- # jq -r . 00:07:15.116 [2024-11-09 16:16:34.653872] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:15.116 [2024-11-09 16:16:34.653957] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59490 ] 00:07:15.116 [2024-11-09 16:16:34.807442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.376 [2024-11-09 16:16:35.027654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.282 16:16:36 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:17.282 00:07:17.282 SPDK Configuration: 00:07:17.282 Core mask: 0x1 00:07:17.282 00:07:17.282 Accel Perf Configuration: 00:07:17.282 Workload Type: decompress 00:07:17.282 Transfer size: 111250 bytes 00:07:17.282 Vector count 1 00:07:17.282 Module: software 00:07:17.282 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:17.282 Queue depth: 32 00:07:17.282 Allocate depth: 32 00:07:17.282 # threads/core: 1 00:07:17.282 Run time: 1 seconds 00:07:17.282 Verify: Yes 00:07:17.282 00:07:17.282 Running for 1 seconds... 
00:07:17.282 00:07:17.282 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:17.282 ------------------------------------------------------------------------------------ 00:07:17.282 0,0 4576/s 189 MiB/s 0 0 00:07:17.282 ==================================================================================== 00:07:17.282 Total 4576/s 485 MiB/s 0 0' 00:07:17.282 16:16:36 -- accel/accel.sh@20 -- # IFS=: 00:07:17.282 16:16:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:17.282 16:16:36 -- accel/accel.sh@20 -- # read -r var val 00:07:17.282 16:16:36 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:17.282 16:16:36 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.282 16:16:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.282 16:16:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.282 16:16:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.282 16:16:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.282 16:16:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.282 16:16:36 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.282 16:16:36 -- accel/accel.sh@42 -- # jq -r . 00:07:17.282 [2024-11-09 16:16:36.836933] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:17.282 [2024-11-09 16:16:36.837041] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59516 ] 00:07:17.282 [2024-11-09 16:16:36.983855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.543 [2024-11-09 16:16:37.170323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.801 16:16:37 -- accel/accel.sh@21 -- # val= 00:07:17.801 16:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.801 16:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.801 16:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.801 16:16:37 -- accel/accel.sh@21 -- # val= 00:07:17.801 16:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.801 16:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.801 16:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.801 16:16:37 -- accel/accel.sh@21 -- # val= 00:07:17.801 16:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.801 16:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.801 16:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.801 16:16:37 -- accel/accel.sh@21 -- # val=0x1 00:07:17.802 16:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.802 16:16:37 -- accel/accel.sh@21 -- # val= 00:07:17.802 16:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.802 16:16:37 -- accel/accel.sh@21 -- # val= 00:07:17.802 16:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.802 16:16:37 -- accel/accel.sh@21 -- # val=decompress 00:07:17.802 16:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.802 16:16:37 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:17.802 16:16:37 -- accel/accel.sh@20 
-- # IFS=: 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.802 16:16:37 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:17.802 16:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.802 16:16:37 -- accel/accel.sh@21 -- # val= 00:07:17.802 16:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.802 16:16:37 -- accel/accel.sh@21 -- # val=software 00:07:17.802 16:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.802 16:16:37 -- accel/accel.sh@23 -- # accel_module=software 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.802 16:16:37 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:17.802 16:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.802 16:16:37 -- accel/accel.sh@21 -- # val=32 00:07:17.802 16:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.802 16:16:37 -- accel/accel.sh@21 -- # val=32 00:07:17.802 16:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.802 16:16:37 -- accel/accel.sh@21 -- # val=1 00:07:17.802 16:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.802 16:16:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.802 16:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.802 16:16:37 -- accel/accel.sh@21 -- # val=Yes 00:07:17.802 16:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.802 16:16:37 -- accel/accel.sh@21 -- # val= 00:07:17.802 16:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:17.802 16:16:37 -- accel/accel.sh@21 -- # val= 00:07:17.802 16:16:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # IFS=: 00:07:17.802 16:16:37 -- accel/accel.sh@20 -- # read -r var val 00:07:19.186 16:16:38 -- accel/accel.sh@21 -- # val= 00:07:19.186 16:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.186 16:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:19.186 16:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:19.186 16:16:38 -- accel/accel.sh@21 -- # val= 00:07:19.186 16:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.186 16:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:19.186 16:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:19.186 16:16:38 -- accel/accel.sh@21 -- # val= 00:07:19.186 16:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.186 16:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:19.186 16:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:19.186 16:16:38 -- accel/accel.sh@21 -- # 
val= 00:07:19.186 16:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.186 16:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:19.186 16:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:19.186 16:16:38 -- accel/accel.sh@21 -- # val= 00:07:19.186 16:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.186 16:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:19.186 16:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:19.186 16:16:38 -- accel/accel.sh@21 -- # val= 00:07:19.186 16:16:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.186 16:16:38 -- accel/accel.sh@20 -- # IFS=: 00:07:19.186 16:16:38 -- accel/accel.sh@20 -- # read -r var val 00:07:19.186 16:16:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:19.186 16:16:38 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:19.186 16:16:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:19.186 00:07:19.186 real 0m4.317s 00:07:19.186 user 0m1.941s 00:07:19.186 sys 0m0.145s 00:07:19.186 16:16:38 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.186 ************************************ 00:07:19.186 END TEST accel_decmop_full 00:07:19.186 ************************************ 00:07:19.186 16:16:38 -- common/autotest_common.sh@10 -- # set +x 00:07:19.446 16:16:38 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:19.446 16:16:38 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:19.446 16:16:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.446 16:16:38 -- common/autotest_common.sh@10 -- # set +x 00:07:19.446 ************************************ 00:07:19.446 START TEST accel_decomp_mcore 00:07:19.446 ************************************ 00:07:19.446 16:16:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:19.446 16:16:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.446 16:16:39 -- accel/accel.sh@17 -- # local accel_module 00:07:19.446 16:16:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:19.446 16:16:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:19.446 16:16:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.446 16:16:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.446 16:16:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.446 16:16:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.446 16:16:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.446 16:16:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.446 16:16:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.446 16:16:39 -- accel/accel.sh@42 -- # jq -r . 00:07:19.446 [2024-11-09 16:16:39.033212] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
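This test adds -m 0xf, which the configuration dump below echoes as "Core mask: 0xf": cores 0-3 are enabled, and the EAL notices that follow report "Total cores available: 4" with four reactors starting. A sketch of the direct run, same path and config assumptions as the earlier sketches:

$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf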
00:07:19.446 [2024-11-09 16:16:39.033788] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59563 ] 00:07:19.446 [2024-11-09 16:16:39.179575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:19.706 [2024-11-09 16:16:39.364080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.706 [2024-11-09 16:16:39.364375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.706 [2024-11-09 16:16:39.365122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.706 [2024-11-09 16:16:39.365244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.613 16:16:41 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:21.613 00:07:21.613 SPDK Configuration: 00:07:21.613 Core mask: 0xf 00:07:21.613 00:07:21.613 Accel Perf Configuration: 00:07:21.613 Workload Type: decompress 00:07:21.613 Transfer size: 4096 bytes 00:07:21.613 Vector count 1 00:07:21.613 Module: software 00:07:21.613 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:21.613 Queue depth: 32 00:07:21.613 Allocate depth: 32 00:07:21.613 # threads/core: 1 00:07:21.613 Run time: 1 seconds 00:07:21.613 Verify: Yes 00:07:21.613 00:07:21.613 Running for 1 seconds... 00:07:21.613 00:07:21.613 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.613 ------------------------------------------------------------------------------------ 00:07:21.613 0,0 56928/s 104 MiB/s 0 0 00:07:21.613 3,0 55968/s 103 MiB/s 0 0 00:07:21.613 2,0 57440/s 105 MiB/s 0 0 00:07:21.613 1,0 56608/s 104 MiB/s 0 0 00:07:21.613 ==================================================================================== 00:07:21.613 Total 226944/s 886 MiB/s 0 0' 00:07:21.613 16:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:21.613 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:21.613 16:16:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:21.613 16:16:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:21.613 16:16:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.613 16:16:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.613 16:16:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.613 16:16:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.613 16:16:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.613 16:16:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.613 16:16:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.613 16:16:41 -- accel/accel.sh@42 -- # jq -r . 00:07:21.613 [2024-11-09 16:16:41.234866] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
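The per-core rows in the table above sum exactly to the Total row; the aggregate bandwidth can be rechecked with shell arithmetic (integer division, so the result is floored):

$ echo $(( (56928 + 55968 + 57440 + 56608) * 4096 / 1048576 )) MiB/s
886 MiB/s

That is 226944 transfers/s across four reactors, roughly 2.8x the 81440/s the single-core accel_decomp run achieved above with the same 4096-byte transfers.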
00:07:21.613 [2024-11-09 16:16:41.234989] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59592 ] 00:07:21.875 [2024-11-09 16:16:41.387188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.875 [2024-11-09 16:16:41.567597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.875 [2024-11-09 16:16:41.567875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.875 [2024-11-09 16:16:41.568747] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.875 [2024-11-09 16:16:41.568821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.137 16:16:41 -- accel/accel.sh@21 -- # val= 00:07:22.137 16:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.137 16:16:41 -- accel/accel.sh@21 -- # val= 00:07:22.137 16:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.137 16:16:41 -- accel/accel.sh@21 -- # val= 00:07:22.137 16:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.137 16:16:41 -- accel/accel.sh@21 -- # val=0xf 00:07:22.137 16:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.137 16:16:41 -- accel/accel.sh@21 -- # val= 00:07:22.137 16:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.137 16:16:41 -- accel/accel.sh@21 -- # val= 00:07:22.137 16:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.137 16:16:41 -- accel/accel.sh@21 -- # val=decompress 00:07:22.137 16:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.137 16:16:41 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.137 16:16:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.137 16:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.137 16:16:41 -- accel/accel.sh@21 -- # val= 00:07:22.137 16:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.137 16:16:41 -- accel/accel.sh@21 -- # val=software 00:07:22.137 16:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.137 16:16:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.137 16:16:41 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.137 16:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # IFS=: 
00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.137 16:16:41 -- accel/accel.sh@21 -- # val=32 00:07:22.137 16:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.137 16:16:41 -- accel/accel.sh@21 -- # val=32 00:07:22.137 16:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.137 16:16:41 -- accel/accel.sh@21 -- # val=1 00:07:22.137 16:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.137 16:16:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.137 16:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.137 16:16:41 -- accel/accel.sh@21 -- # val=Yes 00:07:22.137 16:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.137 16:16:41 -- accel/accel.sh@21 -- # val= 00:07:22.137 16:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:22.137 16:16:41 -- accel/accel.sh@21 -- # val= 00:07:22.137 16:16:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # IFS=: 00:07:22.137 16:16:41 -- accel/accel.sh@20 -- # read -r var val 00:07:24.051 16:16:43 -- accel/accel.sh@21 -- # val= 00:07:24.051 16:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.052 16:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.052 16:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.052 16:16:43 -- accel/accel.sh@21 -- # val= 00:07:24.052 16:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.052 16:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.052 16:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.052 16:16:43 -- accel/accel.sh@21 -- # val= 00:07:24.052 16:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.052 16:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.052 16:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.052 16:16:43 -- accel/accel.sh@21 -- # val= 00:07:24.052 16:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.052 16:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.052 16:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.052 16:16:43 -- accel/accel.sh@21 -- # val= 00:07:24.052 16:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.052 16:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.052 16:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.052 16:16:43 -- accel/accel.sh@21 -- # val= 00:07:24.052 16:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.052 16:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.052 16:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.052 16:16:43 -- accel/accel.sh@21 -- # val= 00:07:24.052 16:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.052 16:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.052 16:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.052 16:16:43 -- accel/accel.sh@21 -- # val= 00:07:24.052 16:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.052 16:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.052 16:16:43 -- 
accel/accel.sh@20 -- # read -r var val 00:07:24.052 16:16:43 -- accel/accel.sh@21 -- # val= 00:07:24.052 16:16:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.052 16:16:43 -- accel/accel.sh@20 -- # IFS=: 00:07:24.052 16:16:43 -- accel/accel.sh@20 -- # read -r var val 00:07:24.052 16:16:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:24.052 16:16:43 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:24.052 16:16:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.052 00:07:24.052 real 0m4.340s 00:07:24.052 user 0m12.853s 00:07:24.052 sys 0m0.305s 00:07:24.052 ************************************ 00:07:24.052 END TEST accel_decomp_mcore 00:07:24.052 ************************************ 00:07:24.052 16:16:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:24.052 16:16:43 -- common/autotest_common.sh@10 -- # set +x 00:07:24.052 16:16:43 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.052 16:16:43 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:24.052 16:16:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:24.052 16:16:43 -- common/autotest_common.sh@10 -- # set +x 00:07:24.052 ************************************ 00:07:24.052 START TEST accel_decomp_full_mcore 00:07:24.052 ************************************ 00:07:24.052 16:16:43 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.052 16:16:43 -- accel/accel.sh@16 -- # local accel_opc 00:07:24.052 16:16:43 -- accel/accel.sh@17 -- # local accel_module 00:07:24.052 16:16:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.052 16:16:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.052 16:16:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.052 16:16:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.052 16:16:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.052 16:16:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.052 16:16:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.052 16:16:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.052 16:16:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.052 16:16:43 -- accel/accel.sh@42 -- # jq -r . 00:07:24.052 [2024-11-09 16:16:43.440201] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:24.052 [2024-11-09 16:16:43.440328] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59636 ] 00:07:24.052 [2024-11-09 16:16:43.588296] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.052 [2024-11-09 16:16:43.772015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.052 [2024-11-09 16:16:43.772296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.052 [2024-11-09 16:16:43.772732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.052 [2024-11-09 16:16:43.772836] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.968 16:16:45 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:25.968 00:07:25.968 SPDK Configuration: 00:07:25.968 Core mask: 0xf 00:07:25.968 00:07:25.968 Accel Perf Configuration: 00:07:25.968 Workload Type: decompress 00:07:25.968 Transfer size: 111250 bytes 00:07:25.968 Vector count 1 00:07:25.968 Module: software 00:07:25.968 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.968 Queue depth: 32 00:07:25.968 Allocate depth: 32 00:07:25.968 # threads/core: 1 00:07:25.968 Run time: 1 seconds 00:07:25.968 Verify: Yes 00:07:25.968 00:07:25.968 Running for 1 seconds... 00:07:25.968 00:07:25.968 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.968 ------------------------------------------------------------------------------------ 00:07:25.968 0,0 4288/s 177 MiB/s 0 0 00:07:25.968 3,0 4288/s 177 MiB/s 0 0 00:07:25.968 2,0 5568/s 230 MiB/s 0 0 00:07:25.968 1,0 4288/s 177 MiB/s 0 0 00:07:25.968 ==================================================================================== 00:07:25.968 Total 18432/s 1955 MiB/s 0 0' 00:07:25.968 16:16:45 -- accel/accel.sh@20 -- # IFS=: 00:07:25.968 16:16:45 -- accel/accel.sh@20 -- # read -r var val 00:07:25.968 16:16:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:25.968 16:16:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:25.968 16:16:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.968 16:16:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.968 16:16:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.968 16:16:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.968 16:16:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.968 16:16:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.968 16:16:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.968 16:16:45 -- accel/accel.sh@42 -- # jq -r . 00:07:25.968 [2024-11-09 16:16:45.619305] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
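Again the four rows above sum to the Total: 4288 + 4288 + 5568 + 4288 = 18432 transfers/s, and at the 111250-byte transfer size that matches the reported aggregate; core 2 ran about 30% faster than its peers in this pass:

$ echo $(( (4288 + 4288 + 5568 + 4288) * 111250 / 1048576 )) MiB/s
1955 MiB/s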
00:07:25.968 [2024-11-09 16:16:45.619415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59670 ] 00:07:26.228 [2024-11-09 16:16:45.765086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.228 [2024-11-09 16:16:45.949377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.228 [2024-11-09 16:16:45.949718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.228 [2024-11-09 16:16:45.950009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.228 [2024-11-09 16:16:45.950026] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.490 16:16:46 -- accel/accel.sh@21 -- # val= 00:07:26.490 16:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.490 16:16:46 -- accel/accel.sh@21 -- # val= 00:07:26.490 16:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.490 16:16:46 -- accel/accel.sh@21 -- # val= 00:07:26.490 16:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.490 16:16:46 -- accel/accel.sh@21 -- # val=0xf 00:07:26.490 16:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.490 16:16:46 -- accel/accel.sh@21 -- # val= 00:07:26.490 16:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.490 16:16:46 -- accel/accel.sh@21 -- # val= 00:07:26.490 16:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.490 16:16:46 -- accel/accel.sh@21 -- # val=decompress 00:07:26.490 16:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.490 16:16:46 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.490 16:16:46 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:26.490 16:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.490 16:16:46 -- accel/accel.sh@21 -- # val= 00:07:26.490 16:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.490 16:16:46 -- accel/accel.sh@21 -- # val=software 00:07:26.490 16:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.490 16:16:46 -- accel/accel.sh@23 -- # accel_module=software 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.490 16:16:46 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:26.490 16:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # IFS=: 
00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.490 16:16:46 -- accel/accel.sh@21 -- # val=32 00:07:26.490 16:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.490 16:16:46 -- accel/accel.sh@21 -- # val=32 00:07:26.490 16:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.490 16:16:46 -- accel/accel.sh@21 -- # val=1 00:07:26.490 16:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.490 16:16:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:26.490 16:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.490 16:16:46 -- accel/accel.sh@21 -- # val=Yes 00:07:26.490 16:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.490 16:16:46 -- accel/accel.sh@21 -- # val= 00:07:26.490 16:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:26.490 16:16:46 -- accel/accel.sh@21 -- # val= 00:07:26.490 16:16:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # IFS=: 00:07:26.490 16:16:46 -- accel/accel.sh@20 -- # read -r var val 00:07:28.405 16:16:47 -- accel/accel.sh@21 -- # val= 00:07:28.405 16:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.405 16:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.405 16:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.405 16:16:47 -- accel/accel.sh@21 -- # val= 00:07:28.405 16:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.405 16:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.405 16:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.405 16:16:47 -- accel/accel.sh@21 -- # val= 00:07:28.405 16:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.405 16:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.405 16:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.405 16:16:47 -- accel/accel.sh@21 -- # val= 00:07:28.405 16:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.405 16:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.405 16:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.405 16:16:47 -- accel/accel.sh@21 -- # val= 00:07:28.405 16:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.405 16:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.405 16:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.405 16:16:47 -- accel/accel.sh@21 -- # val= 00:07:28.405 16:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.405 16:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.405 16:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.405 16:16:47 -- accel/accel.sh@21 -- # val= 00:07:28.405 16:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.405 16:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.405 16:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.405 16:16:47 -- accel/accel.sh@21 -- # val= 00:07:28.405 16:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.405 16:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.405 16:16:47 -- 
accel/accel.sh@20 -- # read -r var val 00:07:28.405 16:16:47 -- accel/accel.sh@21 -- # val= 00:07:28.405 16:16:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.405 16:16:47 -- accel/accel.sh@20 -- # IFS=: 00:07:28.405 16:16:47 -- accel/accel.sh@20 -- # read -r var val 00:07:28.405 16:16:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:28.405 16:16:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:28.405 16:16:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:28.405 ************************************ 00:07:28.405 END TEST accel_decomp_full_mcore 00:07:28.405 ************************************ 00:07:28.405 00:07:28.405 real 0m4.366s 00:07:28.405 user 0m12.939s 00:07:28.405 sys 0m0.319s 00:07:28.405 16:16:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.405 16:16:47 -- common/autotest_common.sh@10 -- # set +x 00:07:28.405 16:16:47 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:28.405 16:16:47 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:07:28.405 16:16:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.405 16:16:47 -- common/autotest_common.sh@10 -- # set +x 00:07:28.405 ************************************ 00:07:28.405 START TEST accel_decomp_mthread 00:07:28.405 ************************************ 00:07:28.405 16:16:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:28.405 16:16:47 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.406 16:16:47 -- accel/accel.sh@17 -- # local accel_module 00:07:28.406 16:16:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:28.406 16:16:47 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:28.406 16:16:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.406 16:16:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.406 16:16:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.406 16:16:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.406 16:16:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.406 16:16:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.406 16:16:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.406 16:16:47 -- accel/accel.sh@42 -- # jq -r . 00:07:28.406 [2024-11-09 16:16:47.872617] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:28.406 [2024-11-09 16:16:47.872716] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59714 ] 00:07:28.406 [2024-11-09 16:16:48.013535] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.666 [2024-11-09 16:16:48.203950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.615 16:16:49 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:30.615 00:07:30.615 SPDK Configuration: 00:07:30.615 Core mask: 0x1 00:07:30.615 00:07:30.615 Accel Perf Configuration: 00:07:30.615 Workload Type: decompress 00:07:30.615 Transfer size: 4096 bytes 00:07:30.615 Vector count 1 00:07:30.615 Module: software 00:07:30.615 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.615 Queue depth: 32 00:07:30.615 Allocate depth: 32 00:07:30.615 # threads/core: 2 00:07:30.615 Run time: 1 seconds 00:07:30.615 Verify: Yes 00:07:30.615 00:07:30.615 Running for 1 seconds... 00:07:30.615 00:07:30.615 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.615 ------------------------------------------------------------------------------------ 00:07:30.615 0,1 30112/s 55 MiB/s 0 0 00:07:30.616 0,0 30016/s 55 MiB/s 0 0 00:07:30.616 ==================================================================================== 00:07:30.616 Total 60128/s 234 MiB/s 0 0' 00:07:30.616 16:16:49 -- accel/accel.sh@20 -- # IFS=: 00:07:30.616 16:16:49 -- accel/accel.sh@20 -- # read -r var val 00:07:30.616 16:16:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:30.616 16:16:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:30.616 16:16:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.616 16:16:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.616 16:16:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.616 16:16:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.616 16:16:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.616 16:16:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.616 16:16:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.616 16:16:49 -- accel/accel.sh@42 -- # jq -r . 00:07:30.616 [2024-11-09 16:16:50.000693] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
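Here -T 2 runs two worker threads on the one enabled core, hence the "# threads/core: 2" line and the two result rows 0,0 and 0,1 above; their combined 60128 transfers/s x 4096 bytes ~ 234 MiB/s matches the Total row. A sketch of the direct invocation, same assumptions as the earlier sketches:

$ /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2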
00:07:30.616 [2024-11-09 16:16:50.000911] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59740 ] 00:07:30.616 [2024-11-09 16:16:50.147825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.616 [2024-11-09 16:16:50.325570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.876 16:16:50 -- accel/accel.sh@21 -- # val= 00:07:30.876 16:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.876 16:16:50 -- accel/accel.sh@21 -- # val= 00:07:30.876 16:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.876 16:16:50 -- accel/accel.sh@21 -- # val= 00:07:30.876 16:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.876 16:16:50 -- accel/accel.sh@21 -- # val=0x1 00:07:30.876 16:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.876 16:16:50 -- accel/accel.sh@21 -- # val= 00:07:30.876 16:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.876 16:16:50 -- accel/accel.sh@21 -- # val= 00:07:30.876 16:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.876 16:16:50 -- accel/accel.sh@21 -- # val=decompress 00:07:30.876 16:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.876 16:16:50 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.876 16:16:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.876 16:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.876 16:16:50 -- accel/accel.sh@21 -- # val= 00:07:30.876 16:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.876 16:16:50 -- accel/accel.sh@21 -- # val=software 00:07:30.876 16:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.876 16:16:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.876 16:16:50 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.876 16:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.876 16:16:50 -- accel/accel.sh@21 -- # val=32 00:07:30.876 16:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.876 16:16:50 -- 
accel/accel.sh@21 -- # val=32 00:07:30.876 16:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.876 16:16:50 -- accel/accel.sh@21 -- # val=2 00:07:30.876 16:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.876 16:16:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.876 16:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.876 16:16:50 -- accel/accel.sh@21 -- # val=Yes 00:07:30.876 16:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.876 16:16:50 -- accel/accel.sh@21 -- # val= 00:07:30.876 16:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:30.876 16:16:50 -- accel/accel.sh@21 -- # val= 00:07:30.876 16:16:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # IFS=: 00:07:30.876 16:16:50 -- accel/accel.sh@20 -- # read -r var val 00:07:32.786 16:16:52 -- accel/accel.sh@21 -- # val= 00:07:32.786 16:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.786 16:16:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.786 16:16:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.786 16:16:52 -- accel/accel.sh@21 -- # val= 00:07:32.786 16:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.786 16:16:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.786 16:16:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.786 16:16:52 -- accel/accel.sh@21 -- # val= 00:07:32.786 16:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.786 16:16:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.786 16:16:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.786 16:16:52 -- accel/accel.sh@21 -- # val= 00:07:32.786 16:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.786 16:16:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.786 16:16:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.786 16:16:52 -- accel/accel.sh@21 -- # val= 00:07:32.786 16:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.786 16:16:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.786 16:16:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.786 16:16:52 -- accel/accel.sh@21 -- # val= 00:07:32.786 16:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.786 16:16:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.786 16:16:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.786 16:16:52 -- accel/accel.sh@21 -- # val= 00:07:32.786 16:16:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.786 16:16:52 -- accel/accel.sh@20 -- # IFS=: 00:07:32.786 16:16:52 -- accel/accel.sh@20 -- # read -r var val 00:07:32.786 16:16:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:32.786 16:16:52 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:32.786 16:16:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.786 00:07:32.786 real 0m4.303s 00:07:32.786 user 0m3.846s 00:07:32.786 sys 0m0.248s 00:07:32.786 ************************************ 00:07:32.786 END TEST accel_decomp_mthread 00:07:32.786 ************************************ 00:07:32.786 16:16:52 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:07:32.786 16:16:52 -- common/autotest_common.sh@10 -- # set +x 00:07:32.786 16:16:52 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.786 16:16:52 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:32.786 16:16:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.786 16:16:52 -- common/autotest_common.sh@10 -- # set +x 00:07:32.786 ************************************ 00:07:32.786 START TEST accel_deomp_full_mthread 00:07:32.786 ************************************ 00:07:32.786 16:16:52 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.786 16:16:52 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.786 16:16:52 -- accel/accel.sh@17 -- # local accel_module 00:07:32.786 16:16:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.786 16:16:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.786 16:16:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.786 16:16:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.786 16:16:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.786 16:16:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.786 16:16:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.786 16:16:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.786 16:16:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.786 16:16:52 -- accel/accel.sh@42 -- # jq -r . 00:07:32.786 [2024-11-09 16:16:52.214480] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:32.786 [2024-11-09 16:16:52.215061] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59787 ] 00:07:32.786 [2024-11-09 16:16:52.366987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.046 [2024-11-09 16:16:52.576463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.958 16:16:54 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:34.958 00:07:34.958 SPDK Configuration: 00:07:34.958 Core mask: 0x1 00:07:34.958 00:07:34.958 Accel Perf Configuration: 00:07:34.958 Workload Type: decompress 00:07:34.958 Transfer size: 111250 bytes 00:07:34.958 Vector count 1 00:07:34.958 Module: software 00:07:34.958 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:34.958 Queue depth: 32 00:07:34.958 Allocate depth: 32 00:07:34.958 # threads/core: 2 00:07:34.958 Run time: 1 seconds 00:07:34.958 Verify: Yes 00:07:34.958 00:07:34.958 Running for 1 seconds... 
00:07:34.958 00:07:34.958 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:34.958 ------------------------------------------------------------------------------------ 00:07:34.958 0,1 2112/s 87 MiB/s 0 0 00:07:34.958 0,0 2112/s 87 MiB/s 0 0 00:07:34.958 ==================================================================================== 00:07:34.958 Total 4224/s 448 MiB/s 0 0' 00:07:34.958 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:34.958 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:34.958 16:16:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.958 16:16:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:34.958 16:16:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.958 16:16:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.958 16:16:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.958 16:16:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.958 16:16:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.958 16:16:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.958 16:16:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.958 16:16:54 -- accel/accel.sh@42 -- # jq -r . 00:07:34.958 [2024-11-09 16:16:54.438490] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:34.958 [2024-11-09 16:16:54.438600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59813 ] 00:07:34.958 [2024-11-09 16:16:54.586709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.219 [2024-11-09 16:16:54.761117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.219 16:16:54 -- accel/accel.sh@21 -- # val= 00:07:35.219 16:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.219 16:16:54 -- accel/accel.sh@21 -- # val= 00:07:35.219 16:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.219 16:16:54 -- accel/accel.sh@21 -- # val= 00:07:35.219 16:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.219 16:16:54 -- accel/accel.sh@21 -- # val=0x1 00:07:35.219 16:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.219 16:16:54 -- accel/accel.sh@21 -- # val= 00:07:35.219 16:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.219 16:16:54 -- accel/accel.sh@21 -- # val= 00:07:35.219 16:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.219 16:16:54 -- accel/accel.sh@21 -- # val=decompress 00:07:35.219 16:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.219 16:16:54 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.219 16:16:54 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:35.219 16:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.219 16:16:54 -- accel/accel.sh@21 -- # val= 00:07:35.219 16:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.219 16:16:54 -- accel/accel.sh@21 -- # val=software 00:07:35.219 16:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.219 16:16:54 -- accel/accel.sh@23 -- # accel_module=software 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.219 16:16:54 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.219 16:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.219 16:16:54 -- accel/accel.sh@21 -- # val=32 00:07:35.219 16:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.219 16:16:54 -- accel/accel.sh@21 -- # val=32 00:07:35.219 16:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.219 16:16:54 -- accel/accel.sh@21 -- # val=2 00:07:35.219 16:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.219 16:16:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:35.219 16:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.219 16:16:54 -- accel/accel.sh@21 -- # val=Yes 00:07:35.219 16:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.219 16:16:54 -- accel/accel.sh@21 -- # val= 00:07:35.219 16:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:35.219 16:16:54 -- accel/accel.sh@21 -- # val= 00:07:35.219 16:16:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # IFS=: 00:07:35.219 16:16:54 -- accel/accel.sh@20 -- # read -r var val 00:07:37.178 16:16:56 -- accel/accel.sh@21 -- # val= 00:07:37.178 16:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.178 16:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.178 16:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.178 16:16:56 -- accel/accel.sh@21 -- # val= 00:07:37.179 16:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.179 16:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.179 16:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.179 16:16:56 -- accel/accel.sh@21 -- # val= 00:07:37.179 16:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.179 16:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.179 16:16:56 -- accel/accel.sh@20 -- # 
read -r var val 00:07:37.179 16:16:56 -- accel/accel.sh@21 -- # val= 00:07:37.179 16:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.179 16:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.179 16:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.179 16:16:56 -- accel/accel.sh@21 -- # val= 00:07:37.179 16:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.179 16:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.179 16:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.179 16:16:56 -- accel/accel.sh@21 -- # val= 00:07:37.179 16:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.179 16:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.179 16:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.179 16:16:56 -- accel/accel.sh@21 -- # val= 00:07:37.179 16:16:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.179 16:16:56 -- accel/accel.sh@20 -- # IFS=: 00:07:37.179 16:16:56 -- accel/accel.sh@20 -- # read -r var val 00:07:37.179 ************************************ 00:07:37.179 END TEST accel_deomp_full_mthread 00:07:37.179 ************************************ 00:07:37.179 16:16:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.179 16:16:56 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:37.179 16:16:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.179 00:07:37.179 real 0m4.360s 00:07:37.179 user 0m3.887s 00:07:37.179 sys 0m0.263s 00:07:37.179 16:16:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.179 16:16:56 -- common/autotest_common.sh@10 -- # set +x 00:07:37.179 16:16:56 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:37.179 16:16:56 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:37.179 16:16:56 -- accel/accel.sh@129 -- # build_accel_config 00:07:37.179 16:16:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.179 16:16:56 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:07:37.179 16:16:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.179 16:16:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.179 16:16:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.179 16:16:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.179 16:16:56 -- common/autotest_common.sh@10 -- # set +x 00:07:37.179 16:16:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.179 16:16:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.179 16:16:56 -- accel/accel.sh@42 -- # jq -r . 00:07:37.179 ************************************ 00:07:37.179 START TEST accel_dif_functional_tests 00:07:37.179 ************************************ 00:07:37.179 16:16:56 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:37.179 [2024-11-09 16:16:56.658372] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
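The accel_dif_functional_tests suite starting here drives SPDK's data-integrity-field (DIF) verify and generate-copy paths through a small CUnit binary. A minimal sketch of invoking it directly, assuming the same tree; the empty subsystem list fed on fd 62 is a placeholder for the accel JSON config the harness assembles via build_accel_config:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    # fd 62 carries the accel JSON config, matching the '-c /dev/fd/62'
    # convention used by the harness above.
    "$SPDK_DIR/test/accel/dif/dif" -c /dev/fd/62 62<<< '{"subsystems": []}'

In the output below, each 'Test:' line covers one combination of Guard, App Tag, and Ref Tag handling; the *ERROR* lines interleaved with 'passed' come from the negative cases, where a detected mismatch is the expected result.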
00:07:37.179 [2024-11-09 16:16:56.658480] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59855 ] 00:07:37.179 [2024-11-09 16:16:56.804607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:37.441 [2024-11-09 16:16:56.981849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.441 [2024-11-09 16:16:56.982081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.441 [2024-11-09 16:16:56.982179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.441 00:07:37.441 00:07:37.441 CUnit - A unit testing framework for C - Version 2.1-3 00:07:37.441 http://cunit.sourceforge.net/ 00:07:37.441 00:07:37.441 00:07:37.441 Suite: accel_dif 00:07:37.441 Test: verify: DIF generated, GUARD check ...passed 00:07:37.441 Test: verify: DIF generated, APPTAG check ...passed 00:07:37.441 Test: verify: DIF generated, REFTAG check ...passed 00:07:37.441 Test: verify: DIF not generated, GUARD check ...[2024-11-09 16:16:57.204032] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 passed 00:07:37.441 Test: verify: DIF not generated, APPTAG check ...passed 00:07:37.441 Test: verify: DIF not generated, REFTAG check ...[2024-11-09 16:16:57.204490] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 [2024-11-09 16:16:57.204561] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a [2024-11-09 16:16:57.204590] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a [2024-11-09 16:16:57.204622] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a passed 00:07:37.441 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:37.441 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:37.441 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:37.441 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:37.441 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:37.441 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:37.441 Test: generate copy: DIF generated, GUARD check ...passed [2024-11-09 16:16:57.204643] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:37.441 [2024-11-09 16:16:57.204807] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:37.441 [2024-11-09 16:16:57.205381] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:37.441 00:07:37.441 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:37.441 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:37.441 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:37.441 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:37.441 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:37.441 Test: generate copy: iovecs-len validate ...[2024-11-09 16:16:57.205921] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. passed 00:07:37.441 Test: generate copy: buffer alignment validate ...passed 00:07:37.441 00:07:37.441
Run Summary: Type Total Ran Passed Failed Inactive 00:07:37.441 suites 1 1 n/a 0 0 00:07:37.441 tests 20 20 20 0 0 00:07:37.441 asserts 204 204 204 0 n/a 00:07:37.441 00:07:37.441 Elapsed time = 0.005 seconds 00:07:38.383 ************************************ 00:07:38.383 END TEST accel_dif_functional_tests 00:07:38.383 ************************************ 00:07:38.383 00:07:38.383 real 0m1.396s 00:07:38.383 user 0m2.588s 00:07:38.383 sys 0m0.171s 00:07:38.383 16:16:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.383 16:16:57 -- common/autotest_common.sh@10 -- # set +x 00:07:38.383 ************************************ 00:07:38.383 END TEST accel 00:07:38.383 00:07:38.383 real 1m30.362s 00:07:38.383 user 1m38.598s 00:07:38.383 sys 0m6.377s 00:07:38.383 16:16:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:38.383 16:16:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.383 ************************************ 00:07:38.383 16:16:58 -- spdk/autotest.sh@177 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:38.383 16:16:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:38.383 16:16:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.383 16:16:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.383 ************************************ 00:07:38.383 START TEST accel_rpc 00:07:38.383 ************************************ 00:07:38.383 16:16:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:38.645 * Looking for test storage... 00:07:38.645 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:38.645 16:16:58 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:38.645 16:16:58 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:38.645 16:16:58 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:38.645 16:16:58 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:38.645 16:16:58 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:38.645 16:16:58 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:38.645 16:16:58 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:38.645 16:16:58 -- scripts/common.sh@335 -- # IFS=.-: 00:07:38.645 16:16:58 -- scripts/common.sh@335 -- # read -ra ver1 00:07:38.645 16:16:58 -- scripts/common.sh@336 -- # IFS=.-: 00:07:38.645 16:16:58 -- scripts/common.sh@336 -- # read -ra ver2 00:07:38.645 16:16:58 -- scripts/common.sh@337 -- # local 'op=<' 00:07:38.645 16:16:58 -- scripts/common.sh@339 -- # ver1_l=2 00:07:38.645 16:16:58 -- scripts/common.sh@340 -- # ver2_l=1 00:07:38.645 16:16:58 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:38.645 16:16:58 -- scripts/common.sh@343 -- # case "$op" in 00:07:38.645 16:16:58 -- scripts/common.sh@344 -- # : 1 00:07:38.645 16:16:58 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:38.645 16:16:58 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:07:38.645 16:16:58 -- scripts/common.sh@364 -- # decimal 1 00:07:38.645 16:16:58 -- scripts/common.sh@352 -- # local d=1 00:07:38.645 16:16:58 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:38.645 16:16:58 -- scripts/common.sh@354 -- # echo 1 00:07:38.645 16:16:58 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:38.645 16:16:58 -- scripts/common.sh@365 -- # decimal 2 00:07:38.645 16:16:58 -- scripts/common.sh@352 -- # local d=2 00:07:38.645 16:16:58 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:38.645 16:16:58 -- scripts/common.sh@354 -- # echo 2 00:07:38.645 16:16:58 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:38.645 16:16:58 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:38.645 16:16:58 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:38.645 16:16:58 -- scripts/common.sh@367 -- # return 0 00:07:38.645 16:16:58 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:38.645 16:16:58 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:38.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.645 --rc genhtml_branch_coverage=1 00:07:38.645 --rc genhtml_function_coverage=1 00:07:38.645 --rc genhtml_legend=1 00:07:38.645 --rc geninfo_all_blocks=1 00:07:38.645 --rc geninfo_unexecuted_blocks=1 00:07:38.645 00:07:38.645 ' 00:07:38.645 16:16:58 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:38.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.645 --rc genhtml_branch_coverage=1 00:07:38.645 --rc genhtml_function_coverage=1 00:07:38.645 --rc genhtml_legend=1 00:07:38.645 --rc geninfo_all_blocks=1 00:07:38.645 --rc geninfo_unexecuted_blocks=1 00:07:38.645 00:07:38.645 ' 00:07:38.645 16:16:58 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:38.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.645 --rc genhtml_branch_coverage=1 00:07:38.645 --rc genhtml_function_coverage=1 00:07:38.645 --rc genhtml_legend=1 00:07:38.645 --rc geninfo_all_blocks=1 00:07:38.645 --rc geninfo_unexecuted_blocks=1 00:07:38.645 00:07:38.645 ' 00:07:38.645 16:16:58 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:38.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:38.645 --rc genhtml_branch_coverage=1 00:07:38.645 --rc genhtml_function_coverage=1 00:07:38.645 --rc genhtml_legend=1 00:07:38.645 --rc geninfo_all_blocks=1 00:07:38.645 --rc geninfo_unexecuted_blocks=1 00:07:38.645 00:07:38.645 ' 00:07:38.645 16:16:58 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:38.645 16:16:58 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=59944 00:07:38.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.645 16:16:58 -- accel/accel_rpc.sh@15 -- # waitforlisten 59944 00:07:38.645 16:16:58 -- common/autotest_common.sh@829 -- # '[' -z 59944 ']' 00:07:38.645 16:16:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.645 16:16:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:38.645 16:16:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
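The accel_rpc suite below starts spdk_tgt with --wait-for-rpc and then drives opcode assignment purely over JSON-RPC. A sketch of the same sequence by hand, assuming a built tree and the default /var/tmp/spdk.sock socket; note that a bogus module name is accepted at this stage because assignments are only resolved when the framework initializes:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"
    "$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &
    # (wait for /var/tmp/spdk.sock to appear before issuing RPCs)
    $RPC accel_assign_opc -o copy -m incorrect    # accepted pre-init
    $RPC accel_assign_opc -o copy -m software     # reassign to the software module
    $RPC framework_start_init                     # finish subsystem init
    $RPC accel_get_opc_assignments | jq -r .copy  # expect "software"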
00:07:38.645 16:16:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:38.645 16:16:58 -- common/autotest_common.sh@10 -- # set +x 00:07:38.645 16:16:58 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:38.645 [2024-11-09 16:16:58.344790] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:38.645 [2024-11-09 16:16:58.345132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59944 ] 00:07:38.905 [2024-11-09 16:16:58.506548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.906 [2024-11-09 16:16:58.675346] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:38.906 [2024-11-09 16:16:58.675533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.473 16:16:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.473 16:16:59 -- common/autotest_common.sh@862 -- # return 0 00:07:39.473 16:16:59 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:39.473 16:16:59 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:39.473 16:16:59 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:39.473 16:16:59 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:39.473 16:16:59 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:39.473 16:16:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:39.473 16:16:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:39.473 16:16:59 -- common/autotest_common.sh@10 -- # set +x 00:07:39.473 ************************************ 00:07:39.473 START TEST accel_assign_opcode 00:07:39.473 ************************************ 00:07:39.473 16:16:59 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:07:39.473 16:16:59 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:39.473 16:16:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.473 16:16:59 -- common/autotest_common.sh@10 -- # set +x 00:07:39.473 [2024-11-09 16:16:59.184140] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:39.473 16:16:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.473 16:16:59 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:39.473 16:16:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.473 16:16:59 -- common/autotest_common.sh@10 -- # set +x 00:07:39.473 [2024-11-09 16:16:59.192103] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:39.473 16:16:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:39.473 16:16:59 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:39.473 16:16:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:39.473 16:16:59 -- common/autotest_common.sh@10 -- # set +x 00:07:40.041 16:16:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.041 16:16:59 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:40.041 16:16:59 -- accel/accel_rpc.sh@42 -- # grep software 00:07:40.041 16:16:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:40.041 16:16:59 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:40.041 16:16:59 -- common/autotest_common.sh@10 -- # set +x 
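Each suite tears its target down through the harness's killprocess helper, which runs next. A simplified sketch of its logic, based only on the checks visible in this trace (probe that the pid is alive, inspect the process name so a sudo wrapper is never signalled directly, then terminate and reap):

    killprocess() {
      local pid=$1
      kill -0 "$pid" || return 1               # still alive?
      local name
      name=$(ps --no-headers -o comm= "$pid")  # e.g. "reactor_0"
      [[ $name == sudo ]] && return 1          # simplified: never signal sudo itself
      echo "killing process with pid $pid"
      kill "$pid" && wait "$pid"               # terminate and reap the child
    }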
00:07:40.041 16:16:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:40.041 software 00:07:40.041 00:07:40.041 real 0m0.575s 00:07:40.041 ************************************ 00:07:40.041 END TEST accel_assign_opcode 00:07:40.041 ************************************ 00:07:40.041 user 0m0.033s 00:07:40.041 sys 0m0.011s 00:07:40.041 16:16:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.041 16:16:59 -- common/autotest_common.sh@10 -- # set +x 00:07:40.041 16:16:59 -- accel/accel_rpc.sh@55 -- # killprocess 59944 00:07:40.042 16:16:59 -- common/autotest_common.sh@936 -- # '[' -z 59944 ']' 00:07:40.042 16:16:59 -- common/autotest_common.sh@940 -- # kill -0 59944 00:07:40.042 16:16:59 -- common/autotest_common.sh@941 -- # uname 00:07:40.042 16:16:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:40.042 16:16:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 59944 00:07:40.300 killing process with pid 59944 00:07:40.300 16:16:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:40.300 16:16:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:40.300 16:16:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 59944' 00:07:40.300 16:16:59 -- common/autotest_common.sh@955 -- # kill 59944 00:07:40.300 16:16:59 -- common/autotest_common.sh@960 -- # wait 59944 00:07:41.676 00:07:41.676 real 0m3.156s 00:07:41.676 user 0m3.151s 00:07:41.676 sys 0m0.404s 00:07:41.676 ************************************ 00:07:41.676 END TEST accel_rpc 00:07:41.676 ************************************ 00:07:41.676 16:17:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:41.676 16:17:01 -- common/autotest_common.sh@10 -- # set +x 00:07:41.676 16:17:01 -- spdk/autotest.sh@178 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:41.676 16:17:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:41.676 16:17:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:41.676 16:17:01 -- common/autotest_common.sh@10 -- # set +x 00:07:41.676 ************************************ 00:07:41.676 START TEST app_cmdline 00:07:41.676 ************************************ 00:07:41.676 16:17:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:41.676 * Looking for test storage... 
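The app_cmdline suite that follows is an allowlist check: spdk_tgt is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods answer and everything else is rejected with JSON-RPC error -32601. A sketch of the same probe by hand, under the same path assumptions:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    "$SPDK_DIR/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    # (wait for /var/tmp/spdk.sock to appear before issuing RPCs)
    "$SPDK_DIR/scripts/rpc.py" spdk_get_version        # allowed: prints the version object
    "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats  # rejected: "Method not found" (-32601)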
00:07:41.676 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:41.676 16:17:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:41.676 16:17:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:41.676 16:17:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:41.676 16:17:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:41.676 16:17:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:41.676 16:17:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:41.676 16:17:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:41.676 16:17:01 -- scripts/common.sh@335 -- # IFS=.-: 00:07:41.676 16:17:01 -- scripts/common.sh@335 -- # read -ra ver1 00:07:41.676 16:17:01 -- scripts/common.sh@336 -- # IFS=.-: 00:07:41.676 16:17:01 -- scripts/common.sh@336 -- # read -ra ver2 00:07:41.676 16:17:01 -- scripts/common.sh@337 -- # local 'op=<' 00:07:41.676 16:17:01 -- scripts/common.sh@339 -- # ver1_l=2 00:07:41.676 16:17:01 -- scripts/common.sh@340 -- # ver2_l=1 00:07:41.676 16:17:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:41.676 16:17:01 -- scripts/common.sh@343 -- # case "$op" in 00:07:41.676 16:17:01 -- scripts/common.sh@344 -- # : 1 00:07:41.676 16:17:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:41.676 16:17:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:41.676 16:17:01 -- scripts/common.sh@364 -- # decimal 1 00:07:41.676 16:17:01 -- scripts/common.sh@352 -- # local d=1 00:07:41.676 16:17:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:41.676 16:17:01 -- scripts/common.sh@354 -- # echo 1 00:07:41.676 16:17:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:41.676 16:17:01 -- scripts/common.sh@365 -- # decimal 2 00:07:41.676 16:17:01 -- scripts/common.sh@352 -- # local d=2 00:07:41.676 16:17:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:41.676 16:17:01 -- scripts/common.sh@354 -- # echo 2 00:07:41.676 16:17:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:41.676 16:17:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:41.676 16:17:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:41.676 16:17:01 -- scripts/common.sh@367 -- # return 0 00:07:41.676 16:17:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.676 16:17:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:41.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.676 --rc genhtml_branch_coverage=1 00:07:41.676 --rc genhtml_function_coverage=1 00:07:41.676 --rc genhtml_legend=1 00:07:41.676 --rc geninfo_all_blocks=1 00:07:41.676 --rc geninfo_unexecuted_blocks=1 00:07:41.676 00:07:41.676 ' 00:07:41.676 16:17:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:41.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.676 --rc genhtml_branch_coverage=1 00:07:41.676 --rc genhtml_function_coverage=1 00:07:41.676 --rc genhtml_legend=1 00:07:41.676 --rc geninfo_all_blocks=1 00:07:41.676 --rc geninfo_unexecuted_blocks=1 00:07:41.676 00:07:41.676 ' 00:07:41.676 16:17:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:41.676 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.676 --rc genhtml_branch_coverage=1 00:07:41.676 --rc genhtml_function_coverage=1 00:07:41.676 --rc genhtml_legend=1 00:07:41.676 --rc geninfo_all_blocks=1 00:07:41.676 --rc geninfo_unexecuted_blocks=1 00:07:41.676 00:07:41.676 ' 00:07:41.676 16:17:01 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:41.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.677 --rc genhtml_branch_coverage=1 00:07:41.677 --rc genhtml_function_coverage=1 00:07:41.677 --rc genhtml_legend=1 00:07:41.677 --rc geninfo_all_blocks=1 00:07:41.677 --rc geninfo_unexecuted_blocks=1 00:07:41.677 00:07:41.677 ' 00:07:41.677 16:17:01 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:41.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.677 16:17:01 -- app/cmdline.sh@17 -- # spdk_tgt_pid=60056 00:07:41.677 16:17:01 -- app/cmdline.sh@18 -- # waitforlisten 60056 00:07:41.677 16:17:01 -- common/autotest_common.sh@829 -- # '[' -z 60056 ']' 00:07:41.677 16:17:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.677 16:17:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:41.677 16:17:01 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:41.677 16:17:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.677 16:17:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:41.677 16:17:01 -- common/autotest_common.sh@10 -- # set +x 00:07:41.935 [2024-11-09 16:17:01.502336] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:41.935 [2024-11-09 16:17:01.502598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60056 ] 00:07:41.935 [2024-11-09 16:17:01.650832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.194 [2024-11-09 16:17:01.793550] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:42.194 [2024-11-09 16:17:01.793703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.760 16:17:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:42.760 16:17:02 -- common/autotest_common.sh@862 -- # return 0 00:07:42.760 16:17:02 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:42.760 { 00:07:42.760 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:07:42.760 "fields": { 00:07:42.760 "major": 24, 00:07:42.760 "minor": 1, 00:07:42.760 "patch": 1, 00:07:42.760 "suffix": "-pre", 00:07:42.760 "commit": "c13c99a5e" 00:07:42.760 } 00:07:42.760 } 00:07:42.760 16:17:02 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:42.760 16:17:02 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:42.760 16:17:02 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:42.760 16:17:02 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:42.760 16:17:02 -- app/cmdline.sh@26 -- # sort 00:07:42.760 16:17:02 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:42.760 16:17:02 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:42.760 16:17:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:42.760 16:17:02 -- common/autotest_common.sh@10 -- # set +x 00:07:42.760 16:17:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:42.760 16:17:02 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:42.760 16:17:02 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:42.760 16:17:02 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.760 16:17:02 -- common/autotest_common.sh@650 -- # local es=0 00:07:42.760 16:17:02 -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.760 16:17:02 -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.760 16:17:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.760 16:17:02 -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.760 16:17:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.760 16:17:02 -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.760 16:17:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:42.760 16:17:02 -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.760 16:17:02 -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:42.761 16:17:02 -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:43.019 request: 00:07:43.019 { 00:07:43.019 "method": "env_dpdk_get_mem_stats", 00:07:43.019 "req_id": 1 00:07:43.019 } 00:07:43.019 Got JSON-RPC error response 00:07:43.019 response: 00:07:43.019 { 00:07:43.019 "code": -32601, 00:07:43.019 "message": "Method not found" 00:07:43.019 } 00:07:43.019 16:17:02 -- common/autotest_common.sh@653 -- # es=1 00:07:43.019 16:17:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:43.019 16:17:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:43.019 16:17:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:43.019 16:17:02 -- app/cmdline.sh@1 -- # killprocess 60056 00:07:43.019 16:17:02 -- common/autotest_common.sh@936 -- # '[' -z 60056 ']' 00:07:43.019 16:17:02 -- common/autotest_common.sh@940 -- # kill -0 60056 00:07:43.019 16:17:02 -- common/autotest_common.sh@941 -- # uname 00:07:43.019 16:17:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:43.019 16:17:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60056 00:07:43.019 killing process with pid 60056 00:07:43.019 16:17:02 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:43.019 16:17:02 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:43.019 16:17:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60056' 00:07:43.019 16:17:02 -- common/autotest_common.sh@955 -- # kill 60056 00:07:43.019 16:17:02 -- common/autotest_common.sh@960 -- # wait 60056 00:07:44.397 ************************************ 00:07:44.397 END TEST app_cmdline 00:07:44.397 ************************************ 00:07:44.397 00:07:44.397 real 0m2.625s 00:07:44.397 user 0m2.893s 00:07:44.397 sys 0m0.401s 00:07:44.397 16:17:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:44.397 16:17:03 -- common/autotest_common.sh@10 -- # set +x 00:07:44.397 16:17:03 -- spdk/autotest.sh@179 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:44.397 16:17:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:44.397 16:17:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.397 16:17:03 -- common/autotest_common.sh@10 -- # set +x 00:07:44.397 
************************************ 00:07:44.397 START TEST version 00:07:44.397 ************************************ 00:07:44.397 16:17:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:44.397 * Looking for test storage... 00:07:44.397 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:44.397 16:17:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:44.397 16:17:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:44.397 16:17:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:44.397 16:17:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:44.397 16:17:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:44.397 16:17:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:44.397 16:17:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:44.397 16:17:04 -- scripts/common.sh@335 -- # IFS=.-: 00:07:44.397 16:17:04 -- scripts/common.sh@335 -- # read -ra ver1 00:07:44.397 16:17:04 -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.397 16:17:04 -- scripts/common.sh@336 -- # read -ra ver2 00:07:44.397 16:17:04 -- scripts/common.sh@337 -- # local 'op=<' 00:07:44.397 16:17:04 -- scripts/common.sh@339 -- # ver1_l=2 00:07:44.397 16:17:04 -- scripts/common.sh@340 -- # ver2_l=1 00:07:44.397 16:17:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:44.397 16:17:04 -- scripts/common.sh@343 -- # case "$op" in 00:07:44.397 16:17:04 -- scripts/common.sh@344 -- # : 1 00:07:44.397 16:17:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:44.397 16:17:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:44.397 16:17:04 -- scripts/common.sh@364 -- # decimal 1 00:07:44.397 16:17:04 -- scripts/common.sh@352 -- # local d=1 00:07:44.397 16:17:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.397 16:17:04 -- scripts/common.sh@354 -- # echo 1 00:07:44.397 16:17:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:44.397 16:17:04 -- scripts/common.sh@365 -- # decimal 2 00:07:44.397 16:17:04 -- scripts/common.sh@352 -- # local d=2 00:07:44.397 16:17:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.397 16:17:04 -- scripts/common.sh@354 -- # echo 2 00:07:44.397 16:17:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:44.397 16:17:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:44.397 16:17:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:44.397 16:17:04 -- scripts/common.sh@367 -- # return 0 00:07:44.397 16:17:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.397 16:17:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:44.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.397 --rc genhtml_branch_coverage=1 00:07:44.397 --rc genhtml_function_coverage=1 00:07:44.397 --rc genhtml_legend=1 00:07:44.397 --rc geninfo_all_blocks=1 00:07:44.397 --rc geninfo_unexecuted_blocks=1 00:07:44.397 00:07:44.397 ' 00:07:44.397 16:17:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:44.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.397 --rc genhtml_branch_coverage=1 00:07:44.397 --rc genhtml_function_coverage=1 00:07:44.397 --rc genhtml_legend=1 00:07:44.397 --rc geninfo_all_blocks=1 00:07:44.397 --rc geninfo_unexecuted_blocks=1 00:07:44.397 00:07:44.397 ' 00:07:44.397 16:17:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:44.397 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:44.397 --rc genhtml_branch_coverage=1 00:07:44.397 --rc genhtml_function_coverage=1 00:07:44.397 --rc genhtml_legend=1 00:07:44.397 --rc geninfo_all_blocks=1 00:07:44.397 --rc geninfo_unexecuted_blocks=1 00:07:44.397 00:07:44.397 ' 00:07:44.397 16:17:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:44.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.397 --rc genhtml_branch_coverage=1 00:07:44.397 --rc genhtml_function_coverage=1 00:07:44.397 --rc genhtml_legend=1 00:07:44.397 --rc geninfo_all_blocks=1 00:07:44.397 --rc geninfo_unexecuted_blocks=1 00:07:44.397 00:07:44.397 ' 00:07:44.397 16:17:04 -- app/version.sh@17 -- # get_header_version major 00:07:44.397 16:17:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:44.397 16:17:04 -- app/version.sh@14 -- # cut -f2 00:07:44.397 16:17:04 -- app/version.sh@14 -- # tr -d '"' 00:07:44.397 16:17:04 -- app/version.sh@17 -- # major=24 00:07:44.397 16:17:04 -- app/version.sh@18 -- # get_header_version minor 00:07:44.397 16:17:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:44.397 16:17:04 -- app/version.sh@14 -- # tr -d '"' 00:07:44.397 16:17:04 -- app/version.sh@14 -- # cut -f2 00:07:44.397 16:17:04 -- app/version.sh@18 -- # minor=1 00:07:44.397 16:17:04 -- app/version.sh@19 -- # get_header_version patch 00:07:44.397 16:17:04 -- app/version.sh@14 -- # tr -d '"' 00:07:44.397 16:17:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:44.397 16:17:04 -- app/version.sh@14 -- # cut -f2 00:07:44.397 16:17:04 -- app/version.sh@19 -- # patch=1 00:07:44.397 16:17:04 -- app/version.sh@20 -- # get_header_version suffix 00:07:44.397 16:17:04 -- app/version.sh@14 -- # cut -f2 00:07:44.397 16:17:04 -- app/version.sh@14 -- # tr -d '"' 00:07:44.397 16:17:04 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:44.397 16:17:04 -- app/version.sh@20 -- # suffix=-pre 00:07:44.397 16:17:04 -- app/version.sh@22 -- # version=24.1 00:07:44.397 16:17:04 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:44.397 16:17:04 -- app/version.sh@25 -- # version=24.1.1 00:07:44.397 16:17:04 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:44.397 16:17:04 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:44.397 16:17:04 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:44.397 16:17:04 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:44.397 16:17:04 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:44.397 ************************************ 00:07:44.397 END TEST version 00:07:44.397 ************************************ 00:07:44.397 00:07:44.397 real 0m0.187s 00:07:44.397 user 0m0.118s 00:07:44.397 sys 0m0.096s 00:07:44.397 16:17:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:44.397 16:17:04 -- common/autotest_common.sh@10 -- # set +x 00:07:44.656 16:17:04 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:07:44.656 16:17:04 -- spdk/autotest.sh@191 -- # uname -s 00:07:44.656 16:17:04 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 
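The version test above reduces to a few lines of header scraping; a condensed sketch of the logic just traced (tab-delimited cut and quote-stripping tr exactly as in version.sh, with rc0 appended because this is a -pre tree):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    get_header_version() {
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
        "$SPDK_DIR/include/spdk/version.h" | cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)             # 24
    minor=$(get_header_version MINOR)             # 1
    patch=$(get_header_version PATCH)             # 1
    version="$major.$minor"
    (( patch != 0 )) && version="$version.$patch"
    version="${version}rc0"                       # 24.1.1rc0
    # version.sh then compares this against python3's spdk.__version__.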
00:07:44.656 16:17:04 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:44.656 16:17:04 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:07:44.656 16:17:04 -- spdk/autotest.sh@204 -- # '[' 1 -eq 1 ']' 00:07:44.656 16:17:04 -- spdk/autotest.sh@205 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:44.656 16:17:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:44.656 16:17:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:44.656 16:17:04 -- common/autotest_common.sh@10 -- # set +x 00:07:44.656 ************************************ 00:07:44.656 START TEST blockdev_nvme 00:07:44.656 ************************************ 00:07:44.656 16:17:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:44.656 * Looking for test storage... 00:07:44.656 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:44.656 16:17:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:44.656 16:17:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:44.656 16:17:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:44.656 16:17:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:44.656 16:17:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:44.656 16:17:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:44.656 16:17:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:44.656 16:17:04 -- scripts/common.sh@335 -- # IFS=.-: 00:07:44.656 16:17:04 -- scripts/common.sh@335 -- # read -ra ver1 00:07:44.656 16:17:04 -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.657 16:17:04 -- scripts/common.sh@336 -- # read -ra ver2 00:07:44.657 16:17:04 -- scripts/common.sh@337 -- # local 'op=<' 00:07:44.657 16:17:04 -- scripts/common.sh@339 -- # ver1_l=2 00:07:44.657 16:17:04 -- scripts/common.sh@340 -- # ver2_l=1 00:07:44.657 16:17:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:44.657 16:17:04 -- scripts/common.sh@343 -- # case "$op" in 00:07:44.657 16:17:04 -- scripts/common.sh@344 -- # : 1 00:07:44.657 16:17:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:44.657 16:17:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.657 16:17:04 -- scripts/common.sh@364 -- # decimal 1 00:07:44.657 16:17:04 -- scripts/common.sh@352 -- # local d=1 00:07:44.657 16:17:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.657 16:17:04 -- scripts/common.sh@354 -- # echo 1 00:07:44.657 16:17:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:44.657 16:17:04 -- scripts/common.sh@365 -- # decimal 2 00:07:44.657 16:17:04 -- scripts/common.sh@352 -- # local d=2 00:07:44.657 16:17:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.657 16:17:04 -- scripts/common.sh@354 -- # echo 2 00:07:44.657 16:17:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:44.657 16:17:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:44.657 16:17:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:44.657 16:17:04 -- scripts/common.sh@367 -- # return 0 00:07:44.657 16:17:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.657 16:17:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:44.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.657 --rc genhtml_branch_coverage=1 00:07:44.657 --rc genhtml_function_coverage=1 00:07:44.657 --rc genhtml_legend=1 00:07:44.657 --rc geninfo_all_blocks=1 00:07:44.657 --rc geninfo_unexecuted_blocks=1 00:07:44.657 00:07:44.657 ' 00:07:44.657 16:17:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:44.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.657 --rc genhtml_branch_coverage=1 00:07:44.657 --rc genhtml_function_coverage=1 00:07:44.657 --rc genhtml_legend=1 00:07:44.657 --rc geninfo_all_blocks=1 00:07:44.657 --rc geninfo_unexecuted_blocks=1 00:07:44.657 00:07:44.657 ' 00:07:44.657 16:17:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:44.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.657 --rc genhtml_branch_coverage=1 00:07:44.657 --rc genhtml_function_coverage=1 00:07:44.657 --rc genhtml_legend=1 00:07:44.657 --rc geninfo_all_blocks=1 00:07:44.657 --rc geninfo_unexecuted_blocks=1 00:07:44.657 00:07:44.657 ' 00:07:44.657 16:17:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:44.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.657 --rc genhtml_branch_coverage=1 00:07:44.657 --rc genhtml_function_coverage=1 00:07:44.657 --rc genhtml_legend=1 00:07:44.657 --rc geninfo_all_blocks=1 00:07:44.657 --rc geninfo_unexecuted_blocks=1 00:07:44.657 00:07:44.657 ' 00:07:44.657 16:17:04 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:44.657 16:17:04 -- bdev/nbd_common.sh@6 -- # set -e 00:07:44.657 16:17:04 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:44.657 16:17:04 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:44.657 16:17:04 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:44.657 16:17:04 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:44.657 16:17:04 -- bdev/blockdev.sh@18 -- # : 00:07:44.657 16:17:04 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:07:44.657 16:17:04 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:07:44.657 16:17:04 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:07:44.657 16:17:04 -- bdev/blockdev.sh@672 -- # uname -s 00:07:44.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
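blockdev_nvme begins by generating a bdev configuration with gen_nvme.sh and loading it over RPC; the full four-controller JSON is echoed below. A trimmed sketch of its shape, showing two of the four PCIe controllers:

    { "subsystem": "bdev", "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:06.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:07.0" } }
    ] }

Each attach call probes one PCIe address and creates the NvmeXnY block devices that bdev_get_bdevs enumerates later in the run.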
00:07:44.657 16:17:04 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:07:44.657 16:17:04 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:07:44.657 16:17:04 -- bdev/blockdev.sh@680 -- # test_type=nvme 00:07:44.657 16:17:04 -- bdev/blockdev.sh@681 -- # crypto_device= 00:07:44.657 16:17:04 -- bdev/blockdev.sh@682 -- # dek= 00:07:44.657 16:17:04 -- bdev/blockdev.sh@683 -- # env_ctx= 00:07:44.657 16:17:04 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:07:44.657 16:17:04 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:07:44.657 16:17:04 -- bdev/blockdev.sh@688 -- # [[ nvme == bdev ]] 00:07:44.657 16:17:04 -- bdev/blockdev.sh@688 -- # [[ nvme == crypto_* ]] 00:07:44.657 16:17:04 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:07:44.657 16:17:04 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=60215 00:07:44.657 16:17:04 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:44.657 16:17:04 -- bdev/blockdev.sh@47 -- # waitforlisten 60215 00:07:44.657 16:17:04 -- common/autotest_common.sh@829 -- # '[' -z 60215 ']' 00:07:44.657 16:17:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.657 16:17:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.657 16:17:04 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:44.657 16:17:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.657 16:17:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.657 16:17:04 -- common/autotest_common.sh@10 -- # set +x 00:07:44.657 [2024-11-09 16:17:04.397055] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:44.657 [2024-11-09 16:17:04.397167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60215 ] 00:07:44.915 [2024-11-09 16:17:04.545541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.173 [2024-11-09 16:17:04.725069] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:45.173 [2024-11-09 16:17:04.725298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.552 16:17:05 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:46.552 16:17:05 -- common/autotest_common.sh@862 -- # return 0 00:07:46.552 16:17:05 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:07:46.552 16:17:05 -- bdev/blockdev.sh@697 -- # setup_nvme_conf 00:07:46.552 16:17:05 -- bdev/blockdev.sh@79 -- # local json 00:07:46.552 16:17:05 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:07:46.552 16:17:05 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:46.552 16:17:05 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:07:46.552 16:17:05 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.552 16:17:05 -- common/autotest_common.sh@10 -- # set +x 00:07:46.552 16:17:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.552 16:17:06 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:07:46.552 16:17:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.552 16:17:06 -- common/autotest_common.sh@10 -- # set +x 00:07:46.552 16:17:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.552 16:17:06 -- bdev/blockdev.sh@738 -- # cat 00:07:46.552 16:17:06 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:07:46.552 16:17:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.552 16:17:06 -- common/autotest_common.sh@10 -- # set +x 00:07:46.552 16:17:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.552 16:17:06 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:07:46.552 16:17:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.552 16:17:06 -- common/autotest_common.sh@10 -- # set +x 00:07:46.552 16:17:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.552 16:17:06 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:46.552 16:17:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.552 16:17:06 -- common/autotest_common.sh@10 -- # set +x 00:07:46.552 16:17:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.552 16:17:06 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:07:46.552 16:17:06 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:07:46.552 16:17:06 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:07:46.552 16:17:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:46.552 16:17:06 -- common/autotest_common.sh@10 -- # set +x 00:07:46.552 16:17:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:46.552 16:17:06 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:07:46.552 16:17:06 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "3737495f-17a9-41f4-abfa-d273cd3a0d62"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3737495f-17a9-41f4-abfa-d273cd3a0d62",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:06.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:06.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "5cf2951e-335e-4371-abcd-8898ea1e8be7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' 
"num_blocks": 1310720,' ' "uuid": "5cf2951e-335e-4371-abcd-8898ea1e8be7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "10fe8187-7968-4ec6-91db-56a964b376e4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "10fe8187-7968-4ec6-91db-56a964b376e4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "d7e02796-13b4-4871-8fc1-f2aa27041b98"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d7e02796-13b4-4871-8fc1-f2aa27041b98",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' 
"multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "c73aeeee-8fe2-485e-9839-8988cfc5848f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c73aeeee-8fe2-485e-9839-8988cfc5848f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "a5f33933-f538-4842-af43-db24a52b4545"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a5f33933-f538-4842-af43-db24a52b4545",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:46.552 16:17:06 -- bdev/blockdev.sh@747 -- # jq -r .name 00:07:46.812 16:17:06 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:07:46.812 16:17:06 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1 00:07:46.812 16:17:06 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:07:46.812 16:17:06 -- bdev/blockdev.sh@752 -- # killprocess 60215 00:07:46.812 16:17:06 -- common/autotest_common.sh@936 -- # '[' -z 60215 ']' 00:07:46.812 16:17:06 -- common/autotest_common.sh@940 -- # kill -0 60215 00:07:46.812 16:17:06 -- common/autotest_common.sh@941 -- # uname 00:07:46.812 16:17:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:46.812 16:17:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o 
comm= 60215 00:07:46.812 killing process with pid 60215 00:07:46.812 16:17:06 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:46.812 16:17:06 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:46.812 16:17:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60215' 00:07:46.812 16:17:06 -- common/autotest_common.sh@955 -- # kill 60215 00:07:46.812 16:17:06 -- common/autotest_common.sh@960 -- # wait 60215 00:07:48.201 16:17:07 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:48.201 16:17:07 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:48.201 16:17:07 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:48.201 16:17:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.201 16:17:07 -- common/autotest_common.sh@10 -- # set +x 00:07:48.201 ************************************ 00:07:48.201 START TEST bdev_hello_world 00:07:48.201 ************************************ 00:07:48.201 16:17:07 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:48.202 [2024-11-09 16:17:07.940190] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:07:48.202 [2024-11-09 16:17:07.940325] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60312 ] 00:07:48.461 [2024-11-09 16:17:08.086419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.720 [2024-11-09 16:17:08.269380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.287 [2024-11-09 16:17:08.800491] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:49.287 [2024-11-09 16:17:08.800681] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:49.287 [2024-11-09 16:17:08.800711] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:49.287 [2024-11-09 16:17:08.803179] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:49.287 [2024-11-09 16:17:08.803801] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:49.287 [2024-11-09 16:17:08.803828] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:49.287 [2024-11-09 16:17:08.804659] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
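Note: the bdev_hello_world test running here exercises the canonical SPDK bdev I/O sequence: start the app from a JSON config, open the target bdev, get an I/O channel, write a buffer, then read it back (the "Hello World!" NOTICE lines around this point). A minimal sketch of running the same example by hand, reusing the exact config path and bdev name from the run_test line above (paths relative to the spdk repo root):
    build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1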
00:07:49.287 00:07:49.287 [2024-11-09 16:17:08.804716] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:49.858 00:07:49.858 real 0m1.735s 00:07:49.858 user 0m1.440s 00:07:49.858 sys 0m0.186s 00:07:49.858 ************************************ 00:07:49.858 END TEST bdev_hello_world 00:07:49.858 ************************************ 00:07:49.858 16:17:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:49.858 16:17:09 -- common/autotest_common.sh@10 -- # set +x 00:07:50.117 16:17:09 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:07:50.117 16:17:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:50.117 16:17:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.117 16:17:09 -- common/autotest_common.sh@10 -- # set +x 00:07:50.117 ************************************ 00:07:50.117 START TEST bdev_bounds 00:07:50.117 ************************************ 00:07:50.117 Process bdevio pid: 60354 00:07:50.117 16:17:09 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:07:50.117 16:17:09 -- bdev/blockdev.sh@288 -- # bdevio_pid=60354 00:07:50.117 16:17:09 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:50.117 16:17:09 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 60354' 00:07:50.117 16:17:09 -- bdev/blockdev.sh@291 -- # waitforlisten 60354 00:07:50.117 16:17:09 -- common/autotest_common.sh@829 -- # '[' -z 60354 ']' 00:07:50.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.117 16:17:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.117 16:17:09 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:50.117 16:17:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.117 16:17:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.117 16:17:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.117 16:17:09 -- common/autotest_common.sh@10 -- # set +x 00:07:50.117 [2024-11-09 16:17:09.769162] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:50.117 [2024-11-09 16:17:09.769343] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60354 ] 00:07:50.378 [2024-11-09 16:17:09.925982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.378 [2024-11-09 16:17:10.139453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.378 [2024-11-09 16:17:10.140062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.378 [2024-11-09 16:17:10.140163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.753 16:17:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.753 16:17:11 -- common/autotest_common.sh@862 -- # return 0 00:07:51.753 16:17:11 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:51.753 I/O targets: 00:07:51.753 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:51.753 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:51.753 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:51.753 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:51.753 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:51.753 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:51.753 00:07:51.753 00:07:51.753 CUnit - A unit testing framework for C - Version 2.1-3 00:07:51.753 http://cunit.sourceforge.net/ 00:07:51.753 00:07:51.753 00:07:51.753 Suite: bdevio tests on: Nvme3n1 00:07:51.753 Test: blockdev write read block ...passed 00:07:51.753 Test: blockdev write zeroes read block ...passed 00:07:51.753 Test: blockdev write zeroes read no split ...passed 00:07:51.753 Test: blockdev write zeroes read split ...passed 00:07:51.753 Test: blockdev write zeroes read split partial ...passed 00:07:51.753 Test: blockdev reset ...[2024-11-09 16:17:11.421532] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:07:51.753 passed 00:07:51.753 Test: blockdev write read 8 blocks ...[2024-11-09 16:17:11.424284] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
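Note: bdevio runs one CUnit suite per bdev listed under "I/O targets" above, and each suite repeats the same sequence: write/read variants, a controller reset (the nvme_ctrlr_disconnect / "Resetting controller successful." NOTICE pair), comparev-and-writev, and NVMe passthru. The COMPARE FAILURE (02/85) and INVALID OPCODE (00/01) completions printed in these suites are expected negative-path results, not errors: those tests deliberately submit a mismatching COMPARE and an unsupported passthru opcode and pass when the controller reports exactly that status. A sketch of the harness invocation, taken from the xtrace lines above (bdevio waits in -w mode until tests.py triggers perform_tests over the RPC socket):
    test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    test/bdev/bdevio/tests.py perform_tests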
00:07:51.753 passed 00:07:51.753 Test: blockdev write read size > 128k ...passed 00:07:51.753 Test: blockdev write read invalid size ...passed 00:07:51.753 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.753 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.753 Test: blockdev write read max offset ...passed 00:07:51.753 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.753 Test: blockdev writev readv 8 blocks ...passed 00:07:51.753 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.753 Test: blockdev writev readv block ...passed 00:07:51.753 Test: blockdev writev readv size > 128k ...passed 00:07:51.753 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.753 Test: blockdev comparev and writev ...[2024-11-09 16:17:11.430290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x270a0e000 len:0x1000 00:07:51.753 passed 00:07:51.753 Test: blockdev nvme passthru rw ...[2024-11-09 16:17:11.430518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:51.753 passed 00:07:51.753 Test: blockdev nvme passthru vendor specific ...[2024-11-09 16:17:11.431372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:51.753 passed 00:07:51.753 Test: blockdev nvme admin passthru ...passed 00:07:51.753 Test: blockdev copy ...[2024-11-09 16:17:11.431504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:51.753 passed 00:07:51.753 Suite: bdevio tests on: Nvme2n3 00:07:51.753 Test: blockdev write read block ...passed 00:07:51.753 Test: blockdev write zeroes read block ...passed 00:07:51.753 Test: blockdev write zeroes read no split ...passed 00:07:51.753 Test: blockdev write zeroes read split ...passed 00:07:51.753 Test: blockdev write zeroes read split partial ...passed 00:07:51.753 Test: blockdev reset ...[2024-11-09 16:17:11.470040] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:51.753 [2024-11-09 16:17:11.472587] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
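Note: Nvme2n1, Nvme2n2, and Nvme2n3 are three namespaces (ns ids 1 to 3 in the bdev_get_bdevs dump earlier) of the single controller at PCI address 0000:00:08.0, which is why the reset NOTICEs for all three suites name the same address. One way to see the bdev-to-controller mapping, assuming the default RPC socket (a sketch, not part of the harness):
    scripts/rpc.py bdev_get_bdevs | jq -r '.[] | "\(.name) \(.driver_specific.nvme[0].pci_address)"'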
00:07:51.753 passed 00:07:51.753 Test: blockdev write read 8 blocks ...passed 00:07:51.753 Test: blockdev write read size > 128k ...passed 00:07:51.753 Test: blockdev write read invalid size ...passed 00:07:51.753 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:51.753 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:51.753 Test: blockdev write read max offset ...passed 00:07:51.753 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:51.753 Test: blockdev writev readv 8 blocks ...passed 00:07:51.753 Test: blockdev writev readv 30 x 1block ...passed 00:07:51.753 Test: blockdev writev readv block ...passed 00:07:51.753 Test: blockdev writev readv size > 128k ...passed 00:07:51.753 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:51.753 Test: blockdev comparev and writev ...[2024-11-09 16:17:11.479169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x270a0a000 len:0x1000 00:07:51.753 [2024-11-09 16:17:11.479356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:51.753 passed 00:07:51.753 Test: blockdev nvme passthru rw ...passed 00:07:51.753 Test: blockdev nvme passthru vendor specific ...[2024-11-09 16:17:11.480208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:51.753 [2024-11-09 16:17:11.480352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:07:51.753 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:07:51.753 passed 00:07:51.753 Test: blockdev copy ...passed 00:07:51.753 Suite: bdevio tests on: Nvme2n2 00:07:51.753 Test: blockdev write read block ...passed 00:07:51.753 Test: blockdev write zeroes read block ...passed 00:07:51.753 Test: blockdev write zeroes read no split ...passed 00:07:51.753 Test: blockdev write zeroes read split ...passed 00:07:52.012 Test: blockdev write zeroes read split partial ...passed 00:07:52.012 Test: blockdev reset ...[2024-11-09 16:17:11.536583] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:52.012 [2024-11-09 16:17:11.539447] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:52.012 passed 00:07:52.012 Test: blockdev write read 8 blocks ...passed 00:07:52.012 Test: blockdev write read size > 128k ...passed 00:07:52.012 Test: blockdev write read invalid size ...passed 00:07:52.012 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.012 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.012 Test: blockdev write read max offset ...passed 00:07:52.012 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.012 Test: blockdev writev readv 8 blocks ...passed 00:07:52.012 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.012 Test: blockdev writev readv block ...passed 00:07:52.012 Test: blockdev writev readv size > 128k ...passed 00:07:52.012 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.012 Test: blockdev comparev and writev ...[2024-11-09 16:17:11.546621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 passed 00:07:52.012 Test: blockdev nvme passthru rw ...passed 00:07:52.012 Test: blockdev nvme passthru vendor specific ...SGL DATA BLOCK ADDRESS 0x252206000 len:0x1000 00:07:52.012 [2024-11-09 16:17:11.546777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:52.012 [2024-11-09 16:17:11.547291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:52.012 passed 00:07:52.012 Test: blockdev nvme admin passthru ...[2024-11-09 16:17:11.547354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:52.012 passed 00:07:52.012 Test: blockdev copy ...passed 00:07:52.012 Suite: bdevio tests on: Nvme2n1 00:07:52.012 Test: blockdev write read block ...passed 00:07:52.012 Test: blockdev write zeroes read block ...passed 00:07:52.012 Test: blockdev write zeroes read no split ...passed 00:07:52.012 Test: blockdev write zeroes read split ...passed 00:07:52.012 Test: blockdev write zeroes read split partial ...passed 00:07:52.012 Test: blockdev reset ...[2024-11-09 16:17:11.605761] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:07:52.012 [2024-11-09 16:17:11.608351] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:52.012 passed 00:07:52.012 Test: blockdev write read 8 blocks ...passed 00:07:52.012 Test: blockdev write read size > 128k ...passed 00:07:52.012 Test: blockdev write read invalid size ...passed 00:07:52.012 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.012 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.012 Test: blockdev write read max offset ...passed 00:07:52.012 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.012 Test: blockdev writev readv 8 blocks ...passed 00:07:52.012 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.012 Test: blockdev writev readv block ...passed 00:07:52.012 Test: blockdev writev readv size > 128k ...passed 00:07:52.012 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.012 Test: blockdev comparev and writev ...[2024-11-09 16:17:11.615263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x252201000 len:0x1000 00:07:52.012 [2024-11-09 16:17:11.615422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:52.012 passed 00:07:52.013 Test: blockdev nvme passthru rw ...passed 00:07:52.013 Test: blockdev nvme passthru vendor specific ...[2024-11-09 16:17:11.616466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:52.013 [2024-11-09 16:17:11.616567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:52.013 passed 00:07:52.013 Test: blockdev nvme admin passthru ...passed 00:07:52.013 Test: blockdev copy ...passed 00:07:52.013 Suite: bdevio tests on: Nvme1n1 00:07:52.013 Test: blockdev write read block ...passed 00:07:52.013 Test: blockdev write zeroes read block ...passed 00:07:52.013 Test: blockdev write zeroes read no split ...passed 00:07:52.013 Test: blockdev write zeroes read split ...passed 00:07:52.013 Test: blockdev write zeroes read split partial ...passed 00:07:52.013 Test: blockdev reset ...[2024-11-09 16:17:11.677091] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:07:52.013 passed 00:07:52.013 Test: blockdev write read 8 blocks ...[2024-11-09 16:17:11.679491] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:52.013 passed 00:07:52.013 Test: blockdev write read size > 128k ...passed 00:07:52.013 Test: blockdev write read invalid size ...passed 00:07:52.013 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.013 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.013 Test: blockdev write read max offset ...passed 00:07:52.013 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.013 Test: blockdev writev readv 8 blocks ...passed 00:07:52.013 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.013 Test: blockdev writev readv block ...passed 00:07:52.013 Test: blockdev writev readv size > 128k ...passed 00:07:52.013 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.013 Test: blockdev comparev and writev ...[2024-11-09 16:17:11.687377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x26b406000 len:0x1000 00:07:52.013 passed 00:07:52.013 Test: blockdev nvme passthru rw ...[2024-11-09 16:17:11.687915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:52.013 passed 00:07:52.013 Test: blockdev nvme passthru vendor specific ...[2024-11-09 16:17:11.689448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:52.013 passed 00:07:52.013 Test: blockdev nvme admin passthru ...[2024-11-09 16:17:11.689826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:52.013 passed 00:07:52.013 Test: blockdev copy ...passed 00:07:52.013 Suite: bdevio tests on: Nvme0n1 00:07:52.013 Test: blockdev write read block ...passed 00:07:52.013 Test: blockdev write zeroes read block ...passed 00:07:52.013 Test: blockdev write zeroes read no split ...passed 00:07:52.013 Test: blockdev write zeroes read split ...passed 00:07:52.013 Test: blockdev write zeroes read split partial ...passed 00:07:52.013 Test: blockdev reset ...[2024-11-09 16:17:11.744866] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:07:52.013 passed 00:07:52.013 Test: blockdev write read 8 blocks ...[2024-11-09 16:17:11.747547] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:52.013 passed 00:07:52.013 Test: blockdev write read size > 128k ...passed 00:07:52.013 Test: blockdev write read invalid size ...passed 00:07:52.013 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:52.013 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:52.013 Test: blockdev write read max offset ...passed 00:07:52.013 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:52.013 Test: blockdev writev readv 8 blocks ...passed 00:07:52.013 Test: blockdev writev readv 30 x 1block ...passed 00:07:52.013 Test: blockdev writev readv block ...passed 00:07:52.013 Test: blockdev writev readv size > 128k ...passed 00:07:52.013 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:52.013 Test: blockdev comparev and writev ...passed 00:07:52.013 Test: blockdev nvme passthru rw ...[2024-11-09 16:17:11.753141] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:52.013 separate metadata which is not supported yet. 
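Note: the skip notice just above is a self-skip, not a failure: per the bdev dump earlier, Nvme0n1 is formatted with 64 bytes of separate (non-interleaved) metadata per block ("md_size": 64, "md_interleave": false), and bdevio's comparev-and-writev path does not yet support separate metadata, so the test logs the ERROR-level notice and passes. A quick way to check a bdev's metadata layout (a sketch, assuming the default RPC socket):
    scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {md_size, md_interleave}'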
00:07:52.013 passed 00:07:52.013 Test: blockdev nvme passthru vendor specific ...passed 00:07:52.013 Test: blockdev nvme admin passthru ...[2024-11-09 16:17:11.753669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:52.013 [2024-11-09 16:17:11.753711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:52.013 passed 00:07:52.013 Test: blockdev copy ...passed 00:07:52.013 00:07:52.013 Run Summary: Type Total Ran Passed Failed Inactive 00:07:52.013 suites 6 6 n/a 0 0 00:07:52.013 tests 138 138 138 0 0 00:07:52.013 asserts 893 893 893 0 n/a 00:07:52.013 00:07:52.013 Elapsed time = 0.988 seconds 00:07:52.013 0 00:07:52.013 16:17:11 -- bdev/blockdev.sh@293 -- # killprocess 60354 00:07:52.013 16:17:11 -- common/autotest_common.sh@936 -- # '[' -z 60354 ']' 00:07:52.013 16:17:11 -- common/autotest_common.sh@940 -- # kill -0 60354 00:07:52.013 16:17:11 -- common/autotest_common.sh@941 -- # uname 00:07:52.013 16:17:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:52.013 16:17:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60354 00:07:52.271 16:17:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:52.271 16:17:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:52.271 16:17:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60354' 00:07:52.271 killing process with pid 60354 00:07:52.271 16:17:11 -- common/autotest_common.sh@955 -- # kill 60354 00:07:52.271 16:17:11 -- common/autotest_common.sh@960 -- # wait 60354 00:07:52.841 16:17:12 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:07:52.841 00:07:52.841 real 0m2.718s 00:07:52.841 user 0m6.985s 00:07:52.841 sys 0m0.304s 00:07:52.841 16:17:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:52.841 16:17:12 -- common/autotest_common.sh@10 -- # set +x 00:07:52.841 ************************************ 00:07:52.841 END TEST bdev_bounds 00:07:52.841 ************************************ 00:07:52.842 16:17:12 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:52.842 16:17:12 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:07:52.842 16:17:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:52.842 16:17:12 -- common/autotest_common.sh@10 -- # set +x 00:07:52.842 ************************************ 00:07:52.842 START TEST bdev_nbd 00:07:52.842 ************************************ 00:07:52.842 16:17:12 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:52.842 16:17:12 -- bdev/blockdev.sh@298 -- # uname -s 00:07:52.842 16:17:12 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:07:52.842 16:17:12 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:52.842 16:17:12 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:52.842 16:17:12 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:52.842 16:17:12 -- bdev/blockdev.sh@302 -- # local bdev_all 00:07:52.842 16:17:12 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:07:52.842 16:17:12 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:07:52.842 16:17:12 -- bdev/blockdev.sh@309 -- # 
nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:52.842 16:17:12 -- bdev/blockdev.sh@309 -- # local nbd_all 00:07:52.842 16:17:12 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:07:52.842 16:17:12 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:52.842 16:17:12 -- bdev/blockdev.sh@312 -- # local nbd_list 00:07:52.842 16:17:12 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:52.842 16:17:12 -- bdev/blockdev.sh@313 -- # local bdev_list 00:07:52.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:52.842 16:17:12 -- bdev/blockdev.sh@316 -- # nbd_pid=60415 00:07:52.842 16:17:12 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:52.842 16:17:12 -- bdev/blockdev.sh@318 -- # waitforlisten 60415 /var/tmp/spdk-nbd.sock 00:07:52.842 16:17:12 -- common/autotest_common.sh@829 -- # '[' -z 60415 ']' 00:07:52.842 16:17:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:52.842 16:17:12 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:52.842 16:17:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:52.842 16:17:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:52.842 16:17:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:52.842 16:17:12 -- common/autotest_common.sh@10 -- # set +x 00:07:52.842 [2024-11-09 16:17:12.540108] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:07:52.842 [2024-11-09 16:17:12.540236] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.102 [2024-11-09 16:17:12.690267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.364 [2024-11-09 16:17:12.878121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.304 16:17:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:54.304 16:17:14 -- common/autotest_common.sh@862 -- # return 0 00:07:54.304 16:17:14 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:54.304 16:17:14 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.304 16:17:14 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:54.304 16:17:14 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:54.304 16:17:14 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:54.304 16:17:14 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.304 16:17:14 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:54.304 16:17:14 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:54.304 16:17:14 -- bdev/nbd_common.sh@24 -- # local i 00:07:54.304 16:17:14 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:54.304 16:17:14 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:54.304 16:17:14 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:54.304 16:17:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:54.564 16:17:14 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:54.564 16:17:14 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:54.564 16:17:14 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:54.564 16:17:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:54.564 16:17:14 -- common/autotest_common.sh@867 -- # local i 00:07:54.564 16:17:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:54.564 16:17:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:54.564 16:17:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:54.564 16:17:14 -- common/autotest_common.sh@871 -- # break 00:07:54.564 16:17:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.564 16:17:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:54.564 16:17:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.564 1+0 records in 00:07:54.564 1+0 records out 00:07:54.564 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000944132 s, 4.3 MB/s 00:07:54.564 16:17:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.564 16:17:14 -- common/autotest_common.sh@884 -- # size=4096 00:07:54.564 16:17:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.564 16:17:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.564 16:17:14 -- common/autotest_common.sh@887 -- # return 0 00:07:54.564 16:17:14 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.564 16:17:14 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:54.564 16:17:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:54.823 16:17:14 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:54.823 16:17:14 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:54.823 16:17:14 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:54.823 16:17:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:54.823 16:17:14 -- common/autotest_common.sh@867 -- # local i 00:07:54.823 16:17:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:54.823 16:17:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:54.823 16:17:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:54.823 16:17:14 -- common/autotest_common.sh@871 -- # break 00:07:54.823 16:17:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:54.823 16:17:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:54.823 16:17:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.823 1+0 records in 00:07:54.823 1+0 records out 00:07:54.823 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000864073 s, 4.7 MB/s 00:07:54.823 16:17:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.823 16:17:14 -- common/autotest_common.sh@884 -- # size=4096 00:07:54.823 16:17:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:54.823 16:17:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:54.823 16:17:14 -- common/autotest_common.sh@887 -- # return 0 00:07:54.823 16:17:14 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:54.823 16:17:14 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:54.823 16:17:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:55.083 16:17:14 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:55.083 16:17:14 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:55.083 16:17:14 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:55.083 16:17:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:07:55.083 16:17:14 -- common/autotest_common.sh@867 -- # local i 00:07:55.083 16:17:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.083 16:17:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.083 16:17:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:07:55.083 16:17:14 -- common/autotest_common.sh@871 -- # break 00:07:55.083 16:17:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.083 16:17:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.083 16:17:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.083 1+0 records in 00:07:55.083 1+0 records out 00:07:55.083 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000793752 s, 5.2 MB/s 00:07:55.083 16:17:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.083 16:17:14 -- common/autotest_common.sh@884 -- # size=4096 00:07:55.083 16:17:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.083 16:17:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.083 16:17:14 -- common/autotest_common.sh@887 -- # return 0 
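Note: the waitfornbd helper traced above verifies each exported device the same way: poll /proc/partitions until the nbd name appears (up to 20 tries), then read a single 4096-byte block with O_DIRECT and confirm the output file is non-empty. The equivalent commands for one device, with the same paths as this run:
    grep -q -w nbd0 /proc/partitions
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
    stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest    # 4096 expected, i.e. non-zero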
00:07:55.083 16:17:14 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.083 16:17:14 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:55.083 16:17:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:55.343 16:17:14 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:55.343 16:17:14 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:55.344 16:17:14 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:55.344 16:17:14 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:07:55.344 16:17:14 -- common/autotest_common.sh@867 -- # local i 00:07:55.344 16:17:14 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.344 16:17:14 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.344 16:17:14 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:07:55.344 16:17:14 -- common/autotest_common.sh@871 -- # break 00:07:55.344 16:17:14 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.344 16:17:14 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.344 16:17:14 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.344 1+0 records in 00:07:55.344 1+0 records out 00:07:55.344 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000995672 s, 4.1 MB/s 00:07:55.344 16:17:14 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.344 16:17:14 -- common/autotest_common.sh@884 -- # size=4096 00:07:55.344 16:17:14 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.344 16:17:14 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.344 16:17:14 -- common/autotest_common.sh@887 -- # return 0 00:07:55.344 16:17:14 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.344 16:17:14 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:55.344 16:17:14 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:55.603 16:17:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:55.603 16:17:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:55.603 16:17:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:55.603 16:17:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:07:55.603 16:17:15 -- common/autotest_common.sh@867 -- # local i 00:07:55.603 16:17:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.603 16:17:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.603 16:17:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:07:55.604 16:17:15 -- common/autotest_common.sh@871 -- # break 00:07:55.604 16:17:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.604 16:17:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.604 16:17:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.604 1+0 records in 00:07:55.604 1+0 records out 00:07:55.604 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111615 s, 3.7 MB/s 00:07:55.604 16:17:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.604 16:17:15 -- common/autotest_common.sh@884 -- # size=4096 00:07:55.604 16:17:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.604 16:17:15 -- common/autotest_common.sh@886 -- # '[' 4096 
'!=' 0 ']' 00:07:55.604 16:17:15 -- common/autotest_common.sh@887 -- # return 0 00:07:55.604 16:17:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.604 16:17:15 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:55.604 16:17:15 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:55.864 16:17:15 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:55.864 16:17:15 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:55.864 16:17:15 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:55.864 16:17:15 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:07:55.864 16:17:15 -- common/autotest_common.sh@867 -- # local i 00:07:55.864 16:17:15 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:55.864 16:17:15 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:55.864 16:17:15 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:07:55.864 16:17:15 -- common/autotest_common.sh@871 -- # break 00:07:55.864 16:17:15 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:55.864 16:17:15 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:55.864 16:17:15 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:55.864 1+0 records in 00:07:55.864 1+0 records out 00:07:55.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116725 s, 3.5 MB/s 00:07:55.864 16:17:15 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.864 16:17:15 -- common/autotest_common.sh@884 -- # size=4096 00:07:55.864 16:17:15 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:55.864 16:17:15 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:55.864 16:17:15 -- common/autotest_common.sh@887 -- # return 0 00:07:55.864 16:17:15 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:55.864 16:17:15 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:55.864 16:17:15 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:55.864 16:17:15 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:55.864 { 00:07:55.864 "nbd_device": "/dev/nbd0", 00:07:55.864 "bdev_name": "Nvme0n1" 00:07:55.864 }, 00:07:55.864 { 00:07:55.864 "nbd_device": "/dev/nbd1", 00:07:55.864 "bdev_name": "Nvme1n1" 00:07:55.864 }, 00:07:55.864 { 00:07:55.864 "nbd_device": "/dev/nbd2", 00:07:55.864 "bdev_name": "Nvme2n1" 00:07:55.864 }, 00:07:55.864 { 00:07:55.864 "nbd_device": "/dev/nbd3", 00:07:55.864 "bdev_name": "Nvme2n2" 00:07:55.864 }, 00:07:55.864 { 00:07:55.864 "nbd_device": "/dev/nbd4", 00:07:55.864 "bdev_name": "Nvme2n3" 00:07:55.864 }, 00:07:55.864 { 00:07:55.864 "nbd_device": "/dev/nbd5", 00:07:55.864 "bdev_name": "Nvme3n1" 00:07:55.864 } 00:07:55.864 ]' 00:07:55.864 16:17:15 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:55.864 16:17:15 -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:55.864 { 00:07:55.864 "nbd_device": "/dev/nbd0", 00:07:55.864 "bdev_name": "Nvme0n1" 00:07:55.864 }, 00:07:55.864 { 00:07:55.864 "nbd_device": "/dev/nbd1", 00:07:55.864 "bdev_name": "Nvme1n1" 00:07:55.864 }, 00:07:55.864 { 00:07:55.864 "nbd_device": "/dev/nbd2", 00:07:55.864 "bdev_name": "Nvme2n1" 00:07:55.864 }, 00:07:55.864 { 00:07:55.864 "nbd_device": "/dev/nbd3", 00:07:55.864 "bdev_name": "Nvme2n2" 00:07:55.864 }, 00:07:55.864 { 00:07:55.864 "nbd_device": 
"/dev/nbd4", 00:07:55.864 "bdev_name": "Nvme2n3" 00:07:55.864 }, 00:07:55.864 { 00:07:55.864 "nbd_device": "/dev/nbd5", 00:07:55.864 "bdev_name": "Nvme3n1" 00:07:55.864 } 00:07:55.864 ]' 00:07:55.864 16:17:15 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:56.125 16:17:15 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:56.125 16:17:15 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.125 16:17:15 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:56.125 16:17:15 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:56.125 16:17:15 -- bdev/nbd_common.sh@51 -- # local i 00:07:56.125 16:17:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.125 16:17:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:56.125 16:17:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:56.125 16:17:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:56.125 16:17:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:56.125 16:17:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.125 16:17:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.125 16:17:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:56.125 16:17:15 -- bdev/nbd_common.sh@41 -- # break 00:07:56.125 16:17:15 -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.125 16:17:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.125 16:17:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:56.385 16:17:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:56.385 16:17:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:56.385 16:17:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:56.385 16:17:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.385 16:17:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.385 16:17:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:56.385 16:17:16 -- bdev/nbd_common.sh@41 -- # break 00:07:56.385 16:17:16 -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.385 16:17:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.385 16:17:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:56.678 16:17:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:56.678 16:17:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:56.678 16:17:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:56.678 16:17:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.678 16:17:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.678 16:17:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:56.678 16:17:16 -- bdev/nbd_common.sh@41 -- # break 00:07:56.678 16:17:16 -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.678 16:17:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.678 16:17:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:56.945 16:17:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:56.945 16:17:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:56.945 16:17:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:56.945 
16:17:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:56.945 16:17:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:56.945 16:17:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:56.945 16:17:16 -- bdev/nbd_common.sh@41 -- # break 00:07:56.945 16:17:16 -- bdev/nbd_common.sh@45 -- # return 0 00:07:56.945 16:17:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.945 16:17:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:56.945 16:17:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@41 -- # break 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@41 -- # break 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.205 16:17:16 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@65 -- # true 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@65 -- # count=0 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@122 -- # count=0 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@127 -- # return 0 00:07:57.466 16:17:17 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@12 -- # local i 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:57.466 16:17:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:57.727 /dev/nbd0 00:07:57.727 16:17:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:57.727 16:17:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:57.727 16:17:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:57.727 16:17:17 -- common/autotest_common.sh@867 -- # local i 00:07:57.727 16:17:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:57.727 16:17:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:57.727 16:17:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:57.727 16:17:17 -- common/autotest_common.sh@871 -- # break 00:07:57.727 16:17:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:57.727 16:17:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:57.727 16:17:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.727 1+0 records in 00:07:57.727 1+0 records out 00:07:57.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00112989 s, 3.6 MB/s 00:07:57.727 16:17:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.727 16:17:17 -- common/autotest_common.sh@884 -- # size=4096 00:07:57.727 16:17:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.727 16:17:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:57.727 16:17:17 -- common/autotest_common.sh@887 -- # return 0 00:07:57.727 16:17:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.727 16:17:17 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:57.727 16:17:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:57.987 /dev/nbd1 00:07:57.987 16:17:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:57.987 16:17:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:57.987 16:17:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:57.987 16:17:17 -- common/autotest_common.sh@867 -- # local i 00:07:57.987 16:17:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:57.987 16:17:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:57.987 16:17:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:57.987 16:17:17 -- common/autotest_common.sh@871 -- # break 
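Note: each bdev is exported as a kernel block device through SPDK's nbd RPCs on the dedicated socket /var/tmp/spdk-nbd.sock owned by the bdev_svc app started earlier; this second pass (nbd_rpc_data_verify) maps the six bdevs onto /dev/nbd0, nbd1, nbd10..nbd13 rather than the nbd0..nbd5 set used in the start/stop pass. The three RPCs involved, as invoked by the harness in the surrounding xtrace:
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0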
00:07:57.987 16:17:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:57.987 16:17:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:57.987 16:17:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.987 1+0 records in 00:07:57.987 1+0 records out 00:07:57.987 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000923888 s, 4.4 MB/s 00:07:57.987 16:17:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.987 16:17:17 -- common/autotest_common.sh@884 -- # size=4096 00:07:57.987 16:17:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.987 16:17:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:57.987 16:17:17 -- common/autotest_common.sh@887 -- # return 0 00:07:57.987 16:17:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:57.987 16:17:17 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:57.987 16:17:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:58.247 /dev/nbd10 00:07:58.247 16:17:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:58.247 16:17:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:58.247 16:17:17 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:07:58.247 16:17:17 -- common/autotest_common.sh@867 -- # local i 00:07:58.247 16:17:17 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:58.247 16:17:17 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:58.247 16:17:17 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:07:58.247 16:17:17 -- common/autotest_common.sh@871 -- # break 00:07:58.247 16:17:17 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:58.247 16:17:17 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:58.247 16:17:17 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.247 1+0 records in 00:07:58.247 1+0 records out 00:07:58.248 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101548 s, 4.0 MB/s 00:07:58.248 16:17:17 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.248 16:17:17 -- common/autotest_common.sh@884 -- # size=4096 00:07:58.248 16:17:17 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.248 16:17:17 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:58.248 16:17:17 -- common/autotest_common.sh@887 -- # return 0 00:07:58.248 16:17:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.248 16:17:17 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:58.248 16:17:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:58.507 /dev/nbd11 00:07:58.507 16:17:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:58.507 16:17:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:58.507 16:17:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:07:58.507 16:17:18 -- common/autotest_common.sh@867 -- # local i 00:07:58.507 16:17:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:58.507 16:17:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:58.507 16:17:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:07:58.507 16:17:18 -- 
common/autotest_common.sh@871 -- # break 00:07:58.507 16:17:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:58.507 16:17:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:58.507 16:17:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.507 1+0 records in 00:07:58.507 1+0 records out 00:07:58.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000762774 s, 5.4 MB/s 00:07:58.507 16:17:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.507 16:17:18 -- common/autotest_common.sh@884 -- # size=4096 00:07:58.507 16:17:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.507 16:17:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:58.507 16:17:18 -- common/autotest_common.sh@887 -- # return 0 00:07:58.507 16:17:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.507 16:17:18 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:58.507 16:17:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:58.768 /dev/nbd12 00:07:58.768 16:17:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:58.768 16:17:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:58.768 16:17:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:07:58.768 16:17:18 -- common/autotest_common.sh@867 -- # local i 00:07:58.768 16:17:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:58.768 16:17:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:58.768 16:17:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:07:58.768 16:17:18 -- common/autotest_common.sh@871 -- # break 00:07:58.768 16:17:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:58.768 16:17:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:58.768 16:17:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.768 1+0 records in 00:07:58.768 1+0 records out 00:07:58.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110263 s, 3.7 MB/s 00:07:58.768 16:17:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.768 16:17:18 -- common/autotest_common.sh@884 -- # size=4096 00:07:58.768 16:17:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.768 16:17:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:58.768 16:17:18 -- common/autotest_common.sh@887 -- # return 0 00:07:58.768 16:17:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:58.768 16:17:18 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:58.768 16:17:18 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:58.768 /dev/nbd13 00:07:59.030 16:17:18 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:59.030 16:17:18 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:59.030 16:17:18 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:07:59.030 16:17:18 -- common/autotest_common.sh@867 -- # local i 00:07:59.030 16:17:18 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:59.030 16:17:18 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:59.030 16:17:18 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
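Each of the six namespaces is exported through the same sequence: one nbd_start_disk RPC against the dedicated /var/tmp/spdk-nbd.sock socket, then the readiness probe sketched above. A hedged sketch of the loop shape implied by the trace (the array contents are taken from the log; the loop itself condenses what nbd_start_disks in nbd_common.sh does):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    bdev_list=(Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1)
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    for ((i = 0; i < ${#bdev_list[@]}; i++)); do
        # attach bdev i to its nbd node, then block until the kernel sees it
        "$rpc" -s /var/tmp/spdk-nbd.sock nbd_start_disk "${bdev_list[i]}" "${nbd_list[i]}"
        waitfornbd "$(basename "${nbd_list[i]}")"
    done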
00:07:59.030 16:17:18 -- common/autotest_common.sh@871 -- # break 00:07:59.030 16:17:18 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:59.030 16:17:18 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:59.030 16:17:18 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.030 1+0 records in 00:07:59.031 1+0 records out 00:07:59.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00102441 s, 4.0 MB/s 00:07:59.031 16:17:18 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.031 16:17:18 -- common/autotest_common.sh@884 -- # size=4096 00:07:59.031 16:17:18 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.031 16:17:18 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:59.031 16:17:18 -- common/autotest_common.sh@887 -- # return 0 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:59.031 { 00:07:59.031 "nbd_device": "/dev/nbd0", 00:07:59.031 "bdev_name": "Nvme0n1" 00:07:59.031 }, 00:07:59.031 { 00:07:59.031 "nbd_device": "/dev/nbd1", 00:07:59.031 "bdev_name": "Nvme1n1" 00:07:59.031 }, 00:07:59.031 { 00:07:59.031 "nbd_device": "/dev/nbd10", 00:07:59.031 "bdev_name": "Nvme2n1" 00:07:59.031 }, 00:07:59.031 { 00:07:59.031 "nbd_device": "/dev/nbd11", 00:07:59.031 "bdev_name": "Nvme2n2" 00:07:59.031 }, 00:07:59.031 { 00:07:59.031 "nbd_device": "/dev/nbd12", 00:07:59.031 "bdev_name": "Nvme2n3" 00:07:59.031 }, 00:07:59.031 { 00:07:59.031 "nbd_device": "/dev/nbd13", 00:07:59.031 "bdev_name": "Nvme3n1" 00:07:59.031 } 00:07:59.031 ]' 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:59.031 { 00:07:59.031 "nbd_device": "/dev/nbd0", 00:07:59.031 "bdev_name": "Nvme0n1" 00:07:59.031 }, 00:07:59.031 { 00:07:59.031 "nbd_device": "/dev/nbd1", 00:07:59.031 "bdev_name": "Nvme1n1" 00:07:59.031 }, 00:07:59.031 { 00:07:59.031 "nbd_device": "/dev/nbd10", 00:07:59.031 "bdev_name": "Nvme2n1" 00:07:59.031 }, 00:07:59.031 { 00:07:59.031 "nbd_device": "/dev/nbd11", 00:07:59.031 "bdev_name": "Nvme2n2" 00:07:59.031 }, 00:07:59.031 { 00:07:59.031 "nbd_device": "/dev/nbd12", 00:07:59.031 "bdev_name": "Nvme2n3" 00:07:59.031 }, 00:07:59.031 { 00:07:59.031 "nbd_device": "/dev/nbd13", 00:07:59.031 "bdev_name": "Nvme3n1" 00:07:59.031 } 00:07:59.031 ]' 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:59.031 /dev/nbd1 00:07:59.031 /dev/nbd10 00:07:59.031 /dev/nbd11 00:07:59.031 /dev/nbd12 00:07:59.031 /dev/nbd13' 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:59.031 /dev/nbd1 00:07:59.031 /dev/nbd10 00:07:59.031 /dev/nbd11 00:07:59.031 /dev/nbd12 00:07:59.031 /dev/nbd13' 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@65 -- # count=6 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@66 -- # echo 6 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@95 -- # count=6 
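The count check that follows is driven entirely by the nbd_get_disks RPC: the JSON dump above is reduced to device paths with jq and compared against the expected six. Reconstructed from the trace as a one-liner (the exit-on-mismatch line is a simplification of the harness's '[' 6 -ne 6 ']' test):

    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd)
    [ "$count" -ne 6 ] && exit 1   # passes here: the trace shows count=6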
00:07:59.031 16:17:18 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:59.031 16:17:18 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:59.292 256+0 records in 00:07:59.292 256+0 records out 00:07:59.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00836797 s, 125 MB/s 00:07:59.292 16:17:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.292 16:17:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:59.292 256+0 records in 00:07:59.292 256+0 records out 00:07:59.292 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.221665 s, 4.7 MB/s 00:07:59.292 16:17:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.292 16:17:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:59.554 256+0 records in 00:07:59.554 256+0 records out 00:07:59.554 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225594 s, 4.6 MB/s 00:07:59.554 16:17:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.554 16:17:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:59.815 256+0 records in 00:07:59.815 256+0 records out 00:07:59.815 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.235039 s, 4.5 MB/s 00:07:59.815 16:17:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:59.815 16:17:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:00.076 256+0 records in 00:08:00.076 256+0 records out 00:08:00.076 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.244638 s, 4.3 MB/s 00:08:00.076 16:17:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.076 16:17:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:00.337 256+0 records in 00:08:00.337 256+0 records out 00:08:00.337 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.243676 s, 4.3 MB/s 00:08:00.337 16:17:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:00.337 16:17:19 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:00.599 256+0 records in 00:08:00.599 256+0 records out 00:08:00.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.255944 s, 4.1 MB/s 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:00.599 16:17:20 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@51 -- # local i 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.599 16:17:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:00.861 16:17:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:00.861 16:17:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:00.861 16:17:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:00.861 16:17:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.861 16:17:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.861 16:17:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:00.861 16:17:20 -- bdev/nbd_common.sh@41 -- # break 00:08:00.861 16:17:20 -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.861 16:17:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.861 16:17:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:01.122 16:17:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:01.122 16:17:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:01.122 16:17:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:01.122 16:17:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.122 16:17:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.122 16:17:20 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:01.122 16:17:20 -- bdev/nbd_common.sh@41 -- # break 00:08:01.122 16:17:20 -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.122 16:17:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.122 16:17:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:01.385 16:17:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:01.385 16:17:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:01.385 16:17:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:01.385 16:17:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.385 16:17:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.385 16:17:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:01.385 16:17:20 -- bdev/nbd_common.sh@41 -- # break 00:08:01.385 16:17:20 -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.385 16:17:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.385 16:17:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:01.647 16:17:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:01.647 16:17:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:01.647 16:17:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:01.647 16:17:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.647 16:17:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.647 16:17:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:01.647 16:17:21 -- bdev/nbd_common.sh@41 -- # break 00:08:01.647 16:17:21 -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.647 16:17:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.647 16:17:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:01.647 16:17:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:01.647 16:17:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:01.647 16:17:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:01.647 16:17:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.647 16:17:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.647 16:17:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:01.908 16:17:21 -- bdev/nbd_common.sh@41 -- # break 00:08:01.908 16:17:21 -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.908 16:17:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:01.908 16:17:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:01.908 16:17:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:01.908 16:17:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:01.908 16:17:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:01.908 16:17:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.908 16:17:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.908 16:17:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:01.908 16:17:21 -- bdev/nbd_common.sh@41 -- # break 00:08:01.908 16:17:21 -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.908 16:17:21 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:01.908 16:17:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.908 16:17:21 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:02.169 16:17:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:02.169 16:17:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:02.169 16:17:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:02.169 16:17:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:02.169 16:17:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:02.169 16:17:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:02.169 16:17:21 -- bdev/nbd_common.sh@65 -- # true 00:08:02.169 16:17:21 -- bdev/nbd_common.sh@65 -- # count=0 00:08:02.169 16:17:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:02.169 16:17:21 -- bdev/nbd_common.sh@104 -- # count=0 00:08:02.169 16:17:21 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:02.169 16:17:21 -- bdev/nbd_common.sh@109 -- # return 0 00:08:02.169 16:17:21 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:08:02.169 16:17:21 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:02.169 16:17:21 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:08:02.169 16:17:21 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:02.169 16:17:21 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:02.169 16:17:21 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:02.430 malloc_lvol_verify 00:08:02.430 16:17:22 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:02.692 147b2478-cfbd-4910-bb6a-e90440d30600 00:08:02.692 16:17:22 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:02.956 4a65227a-527b-4bb0-ad95-64c722b7f297 00:08:02.956 16:17:22 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:02.956 /dev/nbd0 00:08:02.956 16:17:22 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:02.956 mke2fs 1.47.0 (5-Feb-2023) 00:08:02.956 Discarding device blocks: 0/4096 done 00:08:02.956 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:02.956 00:08:02.956 Allocating group tables: 0/1 done 00:08:02.956 Writing inode tables: 0/1 done 00:08:03.217 Creating journal (1024 blocks): done 00:08:03.217 Writing superblocks and filesystem accounting information: 0/1 done 00:08:03.217 00:08:03.217 16:17:22 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:03.217 16:17:22 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:03.217 16:17:22 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.217 16:17:22 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:03.217 16:17:22 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:03.217 16:17:22 -- bdev/nbd_common.sh@51 -- # local i 00:08:03.217 16:17:22 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:03.217 16:17:22 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:03.217 16:17:22 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:03.217 16:17:22 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:03.217 16:17:22 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:08:03.217 16:17:22 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:03.217 16:17:22 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:03.217 16:17:22 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:03.217 16:17:22 -- bdev/nbd_common.sh@41 -- # break 00:08:03.217 16:17:22 -- bdev/nbd_common.sh@45 -- # return 0 00:08:03.217 16:17:22 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:03.217 16:17:22 -- bdev/nbd_common.sh@147 -- # return 0 00:08:03.217 16:17:22 -- bdev/blockdev.sh@324 -- # killprocess 60415 00:08:03.217 16:17:22 -- common/autotest_common.sh@936 -- # '[' -z 60415 ']' 00:08:03.217 16:17:22 -- common/autotest_common.sh@940 -- # kill -0 60415 00:08:03.217 16:17:22 -- common/autotest_common.sh@941 -- # uname 00:08:03.217 16:17:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:03.217 16:17:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60415 00:08:03.217 killing process with pid 60415 00:08:03.217 16:17:22 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:03.217 16:17:22 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:03.217 16:17:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60415' 00:08:03.217 16:17:22 -- common/autotest_common.sh@955 -- # kill 60415 00:08:03.217 16:17:22 -- common/autotest_common.sh@960 -- # wait 60415 00:08:04.604 16:17:23 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:04.604 00:08:04.604 real 0m11.475s 00:08:04.604 user 0m15.351s 00:08:04.604 sys 0m3.452s 00:08:04.604 ************************************ 00:08:04.604 END TEST bdev_nbd 00:08:04.604 ************************************ 00:08:04.604 16:17:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:04.604 16:17:23 -- common/autotest_common.sh@10 -- # set +x 00:08:04.604 16:17:24 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:04.604 16:17:24 -- bdev/blockdev.sh@762 -- # '[' nvme = nvme ']' 00:08:04.604 skipping fio tests on NVMe due to multi-ns failures. 00:08:04.604 16:17:24 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:08:04.604 16:17:24 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:04.604 16:17:24 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:04.604 16:17:24 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:04.604 16:17:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.604 16:17:24 -- common/autotest_common.sh@10 -- # set +x 00:08:04.604 ************************************ 00:08:04.604 START TEST bdev_verify 00:08:04.604 ************************************ 00:08:04.604 16:17:24 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:04.604 [2024-11-09 16:17:24.090420] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
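bdev_verify hands the same six-bdev JSON config to the standalone bdevperf example instead of going through nbd. The flags on the invocation launching above, glossed from how they show up in the results (these readings are inferred from the output below, not quoted from bdevperf's help text):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    # -q 128    queue depth per job
    # -o 4096   4 KiB I/Os
    # -w verify write, read back and compare each block
    # -t 5      five-second run per job
    # -m 0x3    two reactor cores; together with -C every bdev is driven
    #           from both cores, which is why each Nvme device reports
    #           one Core Mask 0x1 job and one Core Mask 0x2 job below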
00:08:04.604 [2024-11-09 16:17:24.090557] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60808 ] 00:08:04.604 [2024-11-09 16:17:24.243980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:04.866 [2024-11-09 16:17:24.473961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.866 [2024-11-09 16:17:24.474118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.439 Running I/O for 5 seconds... 00:08:10.736 00:08:10.736 Latency(us) 00:08:10.736 [2024-11-09T16:17:30.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:10.736 [2024-11-09T16:17:30.506Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:10.736 Verification LBA range: start 0x0 length 0xbd0bd 00:08:10.736 Nvme0n1 : 5.05 2296.73 8.97 0.00 0.00 55581.60 6755.25 58881.58 00:08:10.736 [2024-11-09T16:17:30.506Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:10.736 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:10.736 Nvme0n1 : 5.05 2293.79 8.96 0.00 0.00 55597.59 9931.22 65334.35 00:08:10.736 [2024-11-09T16:17:30.506Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:10.736 Verification LBA range: start 0x0 length 0xa0000 00:08:10.736 Nvme1n1 : 5.06 2299.40 8.98 0.00 0.00 55471.04 8670.92 56461.78 00:08:10.736 [2024-11-09T16:17:30.506Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:10.736 Verification LBA range: start 0xa0000 length 0xa0000 00:08:10.736 Nvme1n1 : 5.06 2292.90 8.96 0.00 0.00 55570.38 10485.76 63721.16 00:08:10.736 [2024-11-09T16:17:30.506Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:10.736 Verification LBA range: start 0x0 length 0x80000 00:08:10.736 Nvme2n1 : 5.06 2298.82 8.98 0.00 0.00 55346.22 7309.78 55251.89 00:08:10.736 [2024-11-09T16:17:30.506Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:10.736 Verification LBA range: start 0x80000 length 0x80000 00:08:10.736 Nvme2n1 : 5.06 2297.98 8.98 0.00 0.00 55340.47 3402.83 52428.80 00:08:10.736 [2024-11-09T16:17:30.506Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:10.736 Verification LBA range: start 0x0 length 0x80000 00:08:10.736 Nvme2n2 : 5.06 2298.00 8.98 0.00 0.00 55312.24 6906.49 56865.08 00:08:10.736 [2024-11-09T16:17:30.506Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:10.736 Verification LBA range: start 0x80000 length 0x80000 00:08:10.736 Nvme2n2 : 5.07 2303.87 9.00 0.00 0.00 55117.68 3188.58 52025.50 00:08:10.736 [2024-11-09T16:17:30.506Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:10.736 Verification LBA range: start 0x0 length 0x80000 00:08:10.736 Nvme2n3 : 5.06 2296.49 8.97 0.00 0.00 55282.98 7612.26 56865.08 00:08:10.736 [2024-11-09T16:17:30.506Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:10.736 Verification LBA range: start 0x80000 length 0x80000 00:08:10.736 Nvme2n3 : 5.07 2303.31 9.00 0.00 0.00 55080.38 3755.72 54445.29 00:08:10.736 [2024-11-09T16:17:30.506Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:10.736 Verification LBA range: start 0x0 length 0x20000 00:08:10.736 Nvme3n1 : 5.07 
2295.75 8.97 0.00 0.00 55226.50 8469.27 59284.87 00:08:10.736 [2024-11-09T16:17:30.506Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:10.736 Verification LBA range: start 0x20000 length 0x20000 00:08:10.736 Nvme3n1 : 5.07 2302.73 9.00 0.00 0.00 55047.87 4310.25 57671.68 00:08:10.736 [2024-11-09T16:17:30.506Z] =================================================================================================================== 00:08:10.736 [2024-11-09T16:17:30.506Z] Total : 27579.77 107.73 0.00 0.00 55330.79 3188.58 65334.35 00:08:32.708 00:08:32.708 real 0m27.873s 00:08:32.708 user 0m54.196s 00:08:32.708 sys 0m0.460s 00:08:32.708 16:17:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:32.708 ************************************ 00:08:32.708 END TEST bdev_verify 00:08:32.708 ************************************ 00:08:32.708 16:17:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.708 16:17:51 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:32.708 16:17:51 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:08:32.708 16:17:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:32.708 16:17:51 -- common/autotest_common.sh@10 -- # set +x 00:08:32.708 ************************************ 00:08:32.708 START TEST bdev_verify_big_io 00:08:32.708 ************************************ 00:08:32.708 16:17:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:32.708 [2024-11-09 16:17:51.992407] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:32.708 [2024-11-09 16:17:51.992512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61123 ] 00:08:32.708 [2024-11-09 16:17:52.139610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:32.708 [2024-11-09 16:17:52.317145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.708 [2024-11-09 16:17:52.317209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.273 Running I/O for 5 seconds... 
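The big-I/O pass now running is the identical verify workload with one knob changed: -o 4096 becomes -o 65536. Comparing the two result tables shows the expected large-block trade-off: per-job IOPS drops from roughly 2300 to the 220-290 range while per-job throughput roughly doubles, from about 9 MiB/s to 14-18 MiB/s:

    # bdev_verify:        bdevperf --json bdev.json -q 128 -o 4096  -w verify -t 5 -C -m 0x3
    # bdev_verify_big_io: bdevperf --json bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3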
00:08:39.836 00:08:39.836 Latency(us) 00:08:39.836 [2024-11-09T16:17:59.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.836 [2024-11-09T16:17:59.606Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:39.836 Verification LBA range: start 0x0 length 0xbd0b 00:08:39.836 Nvme0n1 : 5.34 227.42 14.21 0.00 0.00 549363.93 82676.18 758201.11 00:08:39.836 [2024-11-09T16:17:59.606Z] Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:39.836 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:39.836 Nvme0n1 : 5.35 267.30 16.71 0.00 0.00 468702.90 51017.26 609787.27 00:08:39.836 [2024-11-09T16:17:59.606Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:39.836 Verification LBA range: start 0x0 length 0xa000 00:08:39.836 Nvme1n1 : 5.38 233.08 14.57 0.00 0.00 529738.97 35691.91 690446.97 00:08:39.836 [2024-11-09T16:17:59.606Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:39.836 Verification LBA range: start 0xa000 length 0xa000 00:08:39.836 Nvme1n1 : 5.35 267.23 16.70 0.00 0.00 463019.63 50815.61 558165.07 00:08:39.836 [2024-11-09T16:17:59.606Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:39.836 Verification LBA range: start 0x0 length 0x8000 00:08:39.836 Nvme2n1 : 5.38 233.02 14.56 0.00 0.00 521000.78 35893.56 622692.82 00:08:39.836 [2024-11-09T16:17:59.606Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:39.836 Verification LBA range: start 0x8000 length 0x8000 00:08:39.836 Nvme2n1 : 5.36 275.83 17.24 0.00 0.00 447432.60 4889.99 509769.26 00:08:39.836 [2024-11-09T16:17:59.606Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:39.836 Verification LBA range: start 0x0 length 0x8000 00:08:39.836 Nvme2n2 : 5.41 249.57 15.60 0.00 0.00 482521.12 17341.83 551712.30 00:08:39.836 [2024-11-09T16:17:59.606Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:39.836 Verification LBA range: start 0x8000 length 0x8000 00:08:39.836 Nvme2n2 : 5.36 275.75 17.23 0.00 0.00 441944.14 5620.97 467826.22 00:08:39.836 [2024-11-09T16:17:59.606Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:39.836 Verification LBA range: start 0x0 length 0x8000 00:08:39.836 Nvme2n3 : 5.41 249.49 15.59 0.00 0.00 474573.39 16434.41 490410.93 00:08:39.836 [2024-11-09T16:17:59.606Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:39.836 Verification LBA range: start 0x8000 length 0x8000 00:08:39.836 Nvme2n3 : 5.36 284.08 17.76 0.00 0.00 424872.94 2608.84 477505.38 00:08:39.836 [2024-11-09T16:17:59.606Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:39.836 Verification LBA range: start 0x0 length 0x2000 00:08:39.836 Nvme3n1 : 5.44 287.81 17.99 0.00 0.00 405983.97 642.76 487184.54 00:08:39.836 [2024-11-09T16:17:59.606Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:39.836 Verification LBA range: start 0x2000 length 0x2000 00:08:39.836 Nvme3n1 : 5.37 283.98 17.75 0.00 0.00 419449.78 3352.42 483958.15 00:08:39.836 [2024-11-09T16:17:59.606Z] =================================================================================================================== 00:08:39.836 [2024-11-09T16:17:59.606Z] Total : 3134.57 195.91 0.00 0.00 465571.37 642.76 758201.11 00:08:40.836 00:08:40.836 real 0m8.495s 00:08:40.836 user 
0m15.911s 00:08:40.836 sys 0m0.218s 00:08:40.836 16:18:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:40.836 ************************************ 00:08:40.836 END TEST bdev_verify_big_io 00:08:40.836 ************************************ 00:08:40.836 16:18:00 -- common/autotest_common.sh@10 -- # set +x 00:08:40.836 16:18:00 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:40.836 16:18:00 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:40.836 16:18:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:40.836 16:18:00 -- common/autotest_common.sh@10 -- # set +x 00:08:40.836 ************************************ 00:08:40.836 START TEST bdev_write_zeroes 00:08:40.836 ************************************ 00:08:40.836 16:18:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:40.836 [2024-11-09 16:18:00.551413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:40.836 [2024-11-09 16:18:00.551531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61233 ] 00:08:41.098 [2024-11-09 16:18:00.702003] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.359 [2024-11-09 16:18:00.892128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.927 Running I/O for 1 seconds... 00:08:42.868 00:08:42.868 Latency(us) 00:08:42.868 [2024-11-09T16:18:02.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:42.868 [2024-11-09T16:18:02.638Z] Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:42.868 Nvme0n1 : 1.02 10445.87 40.80 0.00 0.00 12216.07 4663.14 25105.33 00:08:42.868 [2024-11-09T16:18:02.638Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:42.868 Nvme1n1 : 1.02 10433.17 40.75 0.00 0.00 12215.58 8822.15 24399.56 00:08:42.868 [2024-11-09T16:18:02.638Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:42.868 Nvme2n1 : 1.02 10421.36 40.71 0.00 0.00 12194.52 8670.92 23290.49 00:08:42.868 [2024-11-09T16:18:02.638Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:42.868 Nvme2n2 : 1.02 10462.32 40.87 0.00 0.00 12094.38 6654.42 19862.45 00:08:42.868 [2024-11-09T16:18:02.638Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:42.868 Nvme2n3 : 1.02 10450.62 40.82 0.00 0.00 12075.20 6956.90 20164.92 00:08:42.868 [2024-11-09T16:18:02.638Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:42.868 Nvme3n1 : 1.02 10438.73 40.78 0.00 0.00 12053.80 7007.31 19862.45 00:08:42.868 [2024-11-09T16:18:02.638Z] =================================================================================================================== 00:08:42.868 [2024-11-09T16:18:02.638Z] Total : 62652.07 244.73 0.00 0.00 12141.39 4663.14 25105.33 00:08:43.811 00:08:43.811 real 0m2.805s 00:08:43.811 user 0m2.473s 00:08:43.811 sys 0m0.212s 00:08:43.811 ************************************ 00:08:43.811 END TEST 
bdev_write_zeroes 00:08:43.811 ************************************ 00:08:43.811 16:18:03 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:43.811 16:18:03 -- common/autotest_common.sh@10 -- # set +x 00:08:43.811 16:18:03 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:43.812 16:18:03 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:43.812 16:18:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:43.812 16:18:03 -- common/autotest_common.sh@10 -- # set +x 00:08:43.812 ************************************ 00:08:43.812 START TEST bdev_json_nonenclosed 00:08:43.812 ************************************ 00:08:43.812 16:18:03 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:43.812 [2024-11-09 16:18:03.434587] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:43.812 [2024-11-09 16:18:03.434735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61286 ] 00:08:44.073 [2024-11-09 16:18:03.581108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.073 [2024-11-09 16:18:03.794855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.073 [2024-11-09 16:18:03.794994] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:44.073 [2024-11-09 16:18:03.795012] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:44.334 00:08:44.334 real 0m0.719s 00:08:44.334 user 0m0.491s 00:08:44.334 sys 0m0.121s 00:08:44.334 ************************************ 00:08:44.334 END TEST bdev_json_nonenclosed 00:08:44.334 ************************************ 00:08:44.334 16:18:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:44.334 16:18:04 -- common/autotest_common.sh@10 -- # set +x 00:08:44.595 16:18:04 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:44.595 16:18:04 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:44.595 16:18:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:44.595 16:18:04 -- common/autotest_common.sh@10 -- # set +x 00:08:44.595 ************************************ 00:08:44.595 START TEST bdev_json_nonarray 00:08:44.595 ************************************ 00:08:44.595 16:18:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:44.595 [2024-11-09 16:18:04.210397] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
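bdev_json_nonenclosed above and bdev_json_nonarray starting here are failure-path tests: bdevperf is pointed at a deliberately malformed config and must stop through spdk_app_stop with the specific json_config.c *ERROR* (one shown above, one in the run below) rather than crash or hang. Judging only from those two error strings, the bad inputs look roughly like this (illustrative reconstructions; the actual repository files may differ):

    # nonenclosed.json - a bare key/value, not enclosed in {}
    "subsystems": []

    # nonarray.json - enclosed, but 'subsystems' is not an array
    { "subsystems": "not-an-array" }

Each run counts as a pass when the app exits with the expected non-zero status, which the harness records as "spdk_app_stop'd on non-zero".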
00:08:44.595 [2024-11-09 16:18:04.210497] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61317 ] 00:08:44.595 [2024-11-09 16:18:04.358181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.857 [2024-11-09 16:18:04.575314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.857 [2024-11-09 16:18:04.575522] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:08:44.857 [2024-11-09 16:18:04.575550] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:45.119 00:08:45.119 real 0m0.734s 00:08:45.119 user 0m0.515s 00:08:45.119 sys 0m0.112s 00:08:45.119 16:18:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:45.381 16:18:04 -- common/autotest_common.sh@10 -- # set +x 00:08:45.381 ************************************ 00:08:45.381 END TEST bdev_json_nonarray 00:08:45.381 ************************************ 00:08:45.381 16:18:04 -- bdev/blockdev.sh@785 -- # [[ nvme == bdev ]] 00:08:45.381 16:18:04 -- bdev/blockdev.sh@792 -- # [[ nvme == gpt ]] 00:08:45.381 16:18:04 -- bdev/blockdev.sh@796 -- # [[ nvme == crypto_sw ]] 00:08:45.381 16:18:04 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:08:45.381 16:18:04 -- bdev/blockdev.sh@809 -- # cleanup 00:08:45.381 16:18:04 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:45.381 16:18:04 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:45.381 16:18:04 -- bdev/blockdev.sh@24 -- # [[ nvme == rbd ]] 00:08:45.381 16:18:04 -- bdev/blockdev.sh@28 -- # [[ nvme == daos ]] 00:08:45.381 16:18:04 -- bdev/blockdev.sh@32 -- # [[ nvme = \g\p\t ]] 00:08:45.381 16:18:04 -- bdev/blockdev.sh@38 -- # [[ nvme == xnvme ]] 00:08:45.381 00:08:45.381 real 1m0.762s 00:08:45.381 user 1m41.340s 00:08:45.381 sys 0m5.808s 00:08:45.381 16:18:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:45.381 16:18:04 -- common/autotest_common.sh@10 -- # set +x 00:08:45.381 ************************************ 00:08:45.381 END TEST blockdev_nvme 00:08:45.381 ************************************ 00:08:45.381 16:18:04 -- spdk/autotest.sh@206 -- # uname -s 00:08:45.381 16:18:04 -- spdk/autotest.sh@206 -- # [[ Linux == Linux ]] 00:08:45.381 16:18:04 -- spdk/autotest.sh@207 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:45.381 16:18:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:45.381 16:18:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:45.381 16:18:04 -- common/autotest_common.sh@10 -- # set +x 00:08:45.381 ************************************ 00:08:45.381 START TEST blockdev_nvme_gpt 00:08:45.381 ************************************ 00:08:45.381 16:18:05 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:45.381 * Looking for test storage... 
00:08:45.381 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:45.381 16:18:05 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:45.381 16:18:05 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:45.381 16:18:05 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:45.381 16:18:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:45.381 16:18:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:45.381 16:18:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:45.381 16:18:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:45.381 16:18:05 -- scripts/common.sh@335 -- # IFS=.-: 00:08:45.381 16:18:05 -- scripts/common.sh@335 -- # read -ra ver1 00:08:45.381 16:18:05 -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.381 16:18:05 -- scripts/common.sh@336 -- # read -ra ver2 00:08:45.381 16:18:05 -- scripts/common.sh@337 -- # local 'op=<' 00:08:45.381 16:18:05 -- scripts/common.sh@339 -- # ver1_l=2 00:08:45.381 16:18:05 -- scripts/common.sh@340 -- # ver2_l=1 00:08:45.381 16:18:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:45.381 16:18:05 -- scripts/common.sh@343 -- # case "$op" in 00:08:45.381 16:18:05 -- scripts/common.sh@344 -- # : 1 00:08:45.381 16:18:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:45.381 16:18:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:45.381 16:18:05 -- scripts/common.sh@364 -- # decimal 1 00:08:45.381 16:18:05 -- scripts/common.sh@352 -- # local d=1 00:08:45.381 16:18:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.381 16:18:05 -- scripts/common.sh@354 -- # echo 1 00:08:45.381 16:18:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:45.381 16:18:05 -- scripts/common.sh@365 -- # decimal 2 00:08:45.642 16:18:05 -- scripts/common.sh@352 -- # local d=2 00:08:45.642 16:18:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.642 16:18:05 -- scripts/common.sh@354 -- # echo 2 00:08:45.642 16:18:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:45.642 16:18:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:45.642 16:18:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:45.642 16:18:05 -- scripts/common.sh@367 -- # return 0 00:08:45.642 16:18:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.642 16:18:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:45.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.642 --rc genhtml_branch_coverage=1 00:08:45.642 --rc genhtml_function_coverage=1 00:08:45.642 --rc genhtml_legend=1 00:08:45.642 --rc geninfo_all_blocks=1 00:08:45.642 --rc geninfo_unexecuted_blocks=1 00:08:45.642 00:08:45.642 ' 00:08:45.642 16:18:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:45.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.642 --rc genhtml_branch_coverage=1 00:08:45.642 --rc genhtml_function_coverage=1 00:08:45.642 --rc genhtml_legend=1 00:08:45.642 --rc geninfo_all_blocks=1 00:08:45.642 --rc geninfo_unexecuted_blocks=1 00:08:45.642 00:08:45.642 ' 00:08:45.642 16:18:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:45.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.642 --rc genhtml_branch_coverage=1 00:08:45.642 --rc genhtml_function_coverage=1 00:08:45.642 --rc genhtml_legend=1 00:08:45.642 --rc geninfo_all_blocks=1 00:08:45.642 --rc geninfo_unexecuted_blocks=1 00:08:45.642 00:08:45.642 ' 00:08:45.642 16:18:05 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:45.642 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.642 --rc genhtml_branch_coverage=1 00:08:45.642 --rc genhtml_function_coverage=1 00:08:45.642 --rc genhtml_legend=1 00:08:45.642 --rc geninfo_all_blocks=1 00:08:45.642 --rc geninfo_unexecuted_blocks=1 00:08:45.642 00:08:45.642 ' 00:08:45.642 16:18:05 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:45.642 16:18:05 -- bdev/nbd_common.sh@6 -- # set -e 00:08:45.642 16:18:05 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:45.642 16:18:05 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:45.642 16:18:05 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:45.642 16:18:05 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:45.642 16:18:05 -- bdev/blockdev.sh@18 -- # : 00:08:45.642 16:18:05 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:08:45.642 16:18:05 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:08:45.642 16:18:05 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:08:45.642 16:18:05 -- bdev/blockdev.sh@672 -- # uname -s 00:08:45.642 16:18:05 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:08:45.642 16:18:05 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:08:45.642 16:18:05 -- bdev/blockdev.sh@680 -- # test_type=gpt 00:08:45.643 16:18:05 -- bdev/blockdev.sh@681 -- # crypto_device= 00:08:45.643 16:18:05 -- bdev/blockdev.sh@682 -- # dek= 00:08:45.643 16:18:05 -- bdev/blockdev.sh@683 -- # env_ctx= 00:08:45.643 16:18:05 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:08:45.643 16:18:05 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:08:45.643 16:18:05 -- bdev/blockdev.sh@688 -- # [[ gpt == bdev ]] 00:08:45.643 16:18:05 -- bdev/blockdev.sh@688 -- # [[ gpt == crypto_* ]] 00:08:45.643 16:18:05 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:08:45.643 16:18:05 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=61400 00:08:45.643 16:18:05 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:45.643 16:18:05 -- bdev/blockdev.sh@47 -- # waitforlisten 61400 00:08:45.643 16:18:05 -- common/autotest_common.sh@829 -- # '[' -z 61400 ']' 00:08:45.643 16:18:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.643 16:18:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.643 16:18:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.643 16:18:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.643 16:18:05 -- common/autotest_common.sh@10 -- # set +x 00:08:45.643 16:18:05 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:45.643 [2024-11-09 16:18:05.252107] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
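Unlike the earlier bdevperf-based suites, the gpt tests keep a resident spdk_tgt (pid 61400 here) and issue RPCs against /var/tmp/spdk.sock, so waitforlisten must gate everything on the RPC server being up. A minimal sketch of that gate (the rpc_get_methods probe and the 100 ms poll interval are assumptions; the real waitforlisten lives in common/autotest_common.sh, and killprocess is the harness helper seen earlier in the log):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    # block until the target answers a trivial RPC over the UNIX socket
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done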
00:08:45.643 [2024-11-09 16:18:05.252255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61400 ] 00:08:45.643 [2024-11-09 16:18:05.402855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.905 [2024-11-09 16:18:05.618456] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:45.905 [2024-11-09 16:18:05.618701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.291 16:18:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:47.291 16:18:06 -- common/autotest_common.sh@862 -- # return 0 00:08:47.291 16:18:06 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:08:47.291 16:18:06 -- bdev/blockdev.sh@700 -- # setup_gpt_conf 00:08:47.291 16:18:06 -- bdev/blockdev.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:47.551 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:47.551 Waiting for block devices as requested 00:08:47.551 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:08:47.811 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:08:47.811 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:08:47.811 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:08:53.097 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:08:53.097 16:18:12 -- bdev/blockdev.sh@103 -- # get_zoned_devs 00:08:53.097 16:18:12 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:08:53.097 16:18:12 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:08:53.097 16:18:12 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:08:53.097 16:18:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:53.097 16:18:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:08:53.097 16:18:12 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:08:53.097 16:18:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:08:53.097 16:18:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:53.097 16:18:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:53.097 16:18:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:08:53.097 16:18:12 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:08:53.097 16:18:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:53.097 16:18:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:53.097 16:18:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:53.097 16:18:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:08:53.097 16:18:12 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:08:53.097 16:18:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:53.097 16:18:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:53.097 16:18:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:53.097 16:18:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:08:53.097 16:18:12 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:08:53.097 16:18:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:08:53.097 16:18:12 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:53.097 16:18:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:53.097 16:18:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:08:53.097 16:18:12 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:08:53.097 16:18:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:08:53.097 16:18:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:53.097 16:18:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:53.097 16:18:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:08:53.097 16:18:12 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:08:53.097 16:18:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:53.097 16:18:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:53.097 16:18:12 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:08:53.097 16:18:12 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:08:53.097 16:18:12 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:08:53.097 16:18:12 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:53.097 16:18:12 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:08:53.097 16:18:12 -- bdev/blockdev.sh@105 -- # nvme_devs=('/sys/bus/pci/drivers/nvme/0000:00:06.0/nvme/nvme2/nvme2n1' '/sys/bus/pci/drivers/nvme/0000:00:07.0/nvme/nvme3/nvme3n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n1' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n2' '/sys/bus/pci/drivers/nvme/0000:00:08.0/nvme/nvme1/nvme1n3' '/sys/bus/pci/drivers/nvme/0000:00:09.0/nvme/nvme0/nvme0c0n1') 00:08:53.097 16:18:12 -- bdev/blockdev.sh@105 -- # local nvme_devs nvme_dev 00:08:53.097 16:18:12 -- bdev/blockdev.sh@106 -- # gpt_nvme= 00:08:53.097 16:18:12 -- bdev/blockdev.sh@108 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:53.097 16:18:12 -- bdev/blockdev.sh@109 -- # [[ -z '' ]] 00:08:53.097 16:18:12 -- bdev/blockdev.sh@110 -- # dev=/dev/nvme2n1 00:08:53.097 16:18:12 -- bdev/blockdev.sh@111 -- # parted /dev/nvme2n1 -ms print 00:08:53.097 16:18:12 -- bdev/blockdev.sh@111 -- # pt='Error: /dev/nvme2n1: unrecognised disk label 00:08:53.097 BYT; 00:08:53.098 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:53.098 16:18:12 -- bdev/blockdev.sh@112 -- # [[ Error: /dev/nvme2n1: unrecognised disk label 00:08:53.098 BYT; 00:08:53.098 /dev/nvme2n1:6343MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\2\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:53.098 16:18:12 -- bdev/blockdev.sh@113 -- # gpt_nvme=/dev/nvme2n1 00:08:53.098 16:18:12 -- bdev/blockdev.sh@114 -- # break 00:08:53.098 16:18:12 -- bdev/blockdev.sh@117 -- # [[ -n /dev/nvme2n1 ]] 00:08:53.098 16:18:12 -- bdev/blockdev.sh@122 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:53.098 16:18:12 -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:53.098 16:18:12 -- bdev/blockdev.sh@126 -- # parted -s /dev/nvme2n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:53.098 16:18:12 -- bdev/blockdev.sh@128 -- # get_spdk_gpt_old 00:08:53.098 16:18:12 -- scripts/common.sh@410 -- # local spdk_guid 00:08:53.098 16:18:12 -- scripts/common.sh@412 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:53.098 16:18:12 -- 
scripts/common.sh@414 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:53.098 16:18:12 -- scripts/common.sh@415 -- # IFS='()' 00:08:53.098 16:18:12 -- scripts/common.sh@415 -- # read -r _ spdk_guid _ 00:08:53.098 16:18:12 -- scripts/common.sh@415 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:53.098 16:18:12 -- scripts/common.sh@416 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:53.098 16:18:12 -- scripts/common.sh@416 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:53.098 16:18:12 -- scripts/common.sh@418 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:53.098 16:18:12 -- bdev/blockdev.sh@128 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:53.098 16:18:12 -- bdev/blockdev.sh@129 -- # get_spdk_gpt 00:08:53.098 16:18:12 -- scripts/common.sh@422 -- # local spdk_guid 00:08:53.098 16:18:12 -- scripts/common.sh@424 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:53.098 16:18:12 -- scripts/common.sh@426 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:53.098 16:18:12 -- scripts/common.sh@427 -- # IFS='()' 00:08:53.098 16:18:12 -- scripts/common.sh@427 -- # read -r _ spdk_guid _ 00:08:53.098 16:18:12 -- scripts/common.sh@427 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:53.098 16:18:12 -- scripts/common.sh@428 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:53.098 16:18:12 -- scripts/common.sh@428 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:53.098 16:18:12 -- scripts/common.sh@430 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:53.098 16:18:12 -- bdev/blockdev.sh@129 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:53.098 16:18:12 -- bdev/blockdev.sh@130 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme2n1 00:08:54.032 The operation has completed successfully. 00:08:54.032 16:18:13 -- bdev/blockdev.sh@131 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme2n1 00:08:55.013 The operation has completed successfully. 
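The trace above is the whole GPT preparation: parted reports an unrecognised disk label on /dev/nvme2n1, the harness lays down a GPT with two half-size partitions (SPDK_TEST_first, SPDK_TEST_second), and sgdisk then retypes them with the SPDK partition-type GUIDs scraped out of module/bdev/gpt/gpt.h. A minimal sketch of that scrape, assuming the header defines the GUID inside a parenthesised macro and that the cleanup is plain bash parameter expansion (the trace shows the before and after values of spdk_guid, not the substitutions themselves):

    GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
    # capture the text between the parentheses of the macro definition
    IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
    spdk_guid=${spdk_guid//, /-}   # 0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b
    spdk_guid=${spdk_guid//0x/}    # 6527994e-2c5a-4eec-9613-8f5944074e8b
    echo "$spdk_guid"

sgdisk consumes that value as the partition type (-t) and the pre-generated g_unique_partguid values as the partition GUIDs (-u), which is why the same UUIDs reappear below as the aliases of Nvme0n1p1 and Nvme0n1p2.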
00:08:55.013 16:18:14 -- bdev/blockdev.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:55.947 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:55.947 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.947 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.947 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:08:55.947 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:08:56.206 16:18:15 -- bdev/blockdev.sh@133 -- # rpc_cmd bdev_get_bdevs 00:08:56.206 16:18:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.206 16:18:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.206 [] 00:08:56.206 16:18:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.206 16:18:15 -- bdev/blockdev.sh@134 -- # setup_nvme_conf 00:08:56.206 16:18:15 -- bdev/blockdev.sh@79 -- # local json 00:08:56.206 16:18:15 -- bdev/blockdev.sh@80 -- # mapfile -t json 00:08:56.206 16:18:15 -- bdev/blockdev.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:56.206 16:18:15 -- bdev/blockdev.sh@81 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:06.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:07.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:08.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:09.0" } } ] }'\''' 00:08:56.206 16:18:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.206 16:18:15 -- common/autotest_common.sh@10 -- # set +x 00:08:56.465 16:18:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.465 16:18:16 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:08:56.465 16:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.465 16:18:16 -- common/autotest_common.sh@10 -- # set +x 00:08:56.465 16:18:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.465 16:18:16 -- bdev/blockdev.sh@738 -- # cat 00:08:56.465 16:18:16 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:08:56.465 16:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.465 16:18:16 -- common/autotest_common.sh@10 -- # set +x 00:08:56.465 16:18:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.465 16:18:16 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:08:56.465 16:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.465 16:18:16 -- common/autotest_common.sh@10 -- # set +x 00:08:56.465 16:18:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.465 16:18:16 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:56.465 16:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.465 16:18:16 -- common/autotest_common.sh@10 -- # set +x 00:08:56.465 16:18:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.465 16:18:16 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:08:56.465 16:18:16 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:08:56.465 16:18:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.465 16:18:16 -- common/autotest_common.sh@10 -- # set +x 00:08:56.465 16:18:16 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:08:56.465 16:18:16 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:56.465 16:18:16 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:08:56.465 16:18:16 -- bdev/blockdev.sh@747 -- # jq -r .name 00:08:56.466 16:18:16 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Nvme0n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774144,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme0n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 774143,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme0n1",' ' "offset_blocks": 774400,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "6b4eaee9-4560-4c4c-a6f9-1851bfa39e8a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6b4eaee9-4560-4c4c-a6f9-1851bfa39e8a",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:07.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:07.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' 
"name": "Nvme2n1",' ' "aliases": [' ' "f7d818a9-3cf0-4221-8a67-2fb039d54fe0"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f7d818a9-3cf0-4221-8a67-2fb039d54fe0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "a60d522b-3958-4a22-8797-448a1c039a87"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a60d522b-3958-4a22-8797-448a1c039a87",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "b7da047d-4df6-4967-87b6-1107721a6398"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "b7da047d-4df6-4967-87b6-1107721a6398",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:08.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:08.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' 
' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "769f0aff-7466-405d-954a-54fb610147bd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "769f0aff-7466-405d-954a-54fb610147bd",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": true,' ' "nvme_io": true' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:09.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:09.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:56.466 16:18:16 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:08:56.466 16:18:16 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Nvme0n1p1 00:08:56.466 16:18:16 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:08:56.466 16:18:16 -- bdev/blockdev.sh@752 -- # killprocess 61400 00:08:56.466 16:18:16 -- common/autotest_common.sh@936 -- # '[' -z 61400 ']' 00:08:56.466 16:18:16 -- common/autotest_common.sh@940 -- # kill -0 61400 00:08:56.466 16:18:16 -- common/autotest_common.sh@941 -- # uname 00:08:56.466 16:18:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:56.466 16:18:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61400 00:08:56.466 16:18:16 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:56.466 killing process with pid 61400 00:08:56.466 16:18:16 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:56.466 16:18:16 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61400' 00:08:56.466 16:18:16 -- common/autotest_common.sh@955 -- # kill 61400 00:08:56.466 16:18:16 -- common/autotest_common.sh@960 -- # wait 61400 00:08:57.840 16:18:17 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:57.840 16:18:17 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:57.840 16:18:17 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:08:57.840 16:18:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:57.840 16:18:17 -- common/autotest_common.sh@10 -- # set +x 00:08:57.840 ************************************ 00:08:57.840 START TEST bdev_hello_world 00:08:57.840 ************************************ 00:08:57.840 16:18:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev 
--json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1p1 '' 00:08:57.840 [2024-11-09 16:18:17.465893] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:57.840 [2024-11-09 16:18:17.466009] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62052 ] 00:08:58.097 [2024-11-09 16:18:17.615627] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.097 [2024-11-09 16:18:17.755672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.669 [2024-11-09 16:18:18.234195] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:58.669 [2024-11-09 16:18:18.234297] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1p1 00:08:58.670 [2024-11-09 16:18:18.234333] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:58.670 [2024-11-09 16:18:18.237124] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:58.670 [2024-11-09 16:18:18.237944] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:58.670 [2024-11-09 16:18:18.238001] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:58.670 [2024-11-09 16:18:18.238877] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:58.670 00:08:58.670 [2024-11-09 16:18:18.238940] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:59.610 00:08:59.610 real 0m1.694s 00:08:59.610 user 0m1.410s 00:08:59.610 sys 0m0.175s 00:08:59.610 16:18:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:59.610 ************************************ 00:08:59.610 END TEST bdev_hello_world 00:08:59.610 ************************************ 00:08:59.610 16:18:19 -- common/autotest_common.sh@10 -- # set +x 00:08:59.610 16:18:19 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:08:59.610 16:18:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:59.610 16:18:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:59.610 16:18:19 -- common/autotest_common.sh@10 -- # set +x 00:08:59.610 ************************************ 00:08:59.610 START TEST bdev_bounds 00:08:59.610 ************************************ 00:08:59.610 16:18:19 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:08:59.610 16:18:19 -- bdev/blockdev.sh@288 -- # bdevio_pid=62094 00:08:59.610 16:18:19 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:59.610 Process bdevio pid: 62094 00:08:59.610 16:18:19 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 62094' 00:08:59.610 16:18:19 -- bdev/blockdev.sh@291 -- # waitforlisten 62094 00:08:59.610 16:18:19 -- common/autotest_common.sh@829 -- # '[' -z 62094 ']' 00:08:59.610 16:18:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.610 16:18:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:59.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.610 16:18:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
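At this point the hello-world pass has already completed: hello_bdev opened Nvme0n1p1, wrote a buffer through an io channel, read it back as "Hello World!", and exited in roughly 1.7 seconds. The step can be replayed by hand with the same invocation the harness used; a sketch assuming the repo layout from the trace, where bdev.json is the NVMe attach config prepared earlier:

    SPDK=/home/vagrant/spdk_repo/spdk
    # run the example against the first GPT partition, as the harness did
    sudo "$SPDK/build/examples/hello_bdev" \
        --json "$SPDK/test/bdev/bdev.json" \
        -b Nvme0n1p1
    # expected tail of the output, per the log above:
    #   *NOTICE*: Read string from bdev : Hello World!

The bdev_bounds test starting here hands the same bdev.json to bdevio, whose per-suite results follow.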
00:08:59.610 16:18:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:59.610 16:18:19 -- common/autotest_common.sh@10 -- # set +x 00:08:59.610 16:18:19 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:59.610 [2024-11-09 16:18:19.213656] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:08:59.610 [2024-11-09 16:18:19.213783] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62094 ] 00:08:59.610 [2024-11-09 16:18:19.364576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:59.868 [2024-11-09 16:18:19.515877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:59.868 [2024-11-09 16:18:19.516024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.868 [2024-11-09 16:18:19.516058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.434 16:18:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:00.434 16:18:20 -- common/autotest_common.sh@862 -- # return 0 00:09:00.434 16:18:20 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:09:00.434 I/O targets: 00:09:00.434 Nvme0n1p1: 774144 blocks of 4096 bytes (3024 MiB) 00:09:00.434 Nvme0n1p2: 774143 blocks of 4096 bytes (3024 MiB) 00:09:00.434 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:09:00.434 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:00.434 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:00.434 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:09:00.434 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:09:00.434 00:09:00.434 00:09:00.434 CUnit - A unit testing framework for C - Version 2.1-3 00:09:00.434 http://cunit.sourceforge.net/ 00:09:00.434 00:09:00.434 00:09:00.434 Suite: bdevio tests on: Nvme3n1 00:09:00.434 Test: blockdev write read block ...passed 00:09:00.434 Test: blockdev write zeroes read block ...passed 00:09:00.434 Test: blockdev write zeroes read no split ...passed 00:09:00.434 Test: blockdev write zeroes read split ...passed 00:09:00.434 Test: blockdev write zeroes read split partial ...passed 00:09:00.434 Test: blockdev reset ...[2024-11-09 16:18:20.181366] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:00.434 [2024-11-09 16:18:20.183804] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:00.434 passed 00:09:00.434 Test: blockdev write read 8 blocks ...passed 00:09:00.434 Test: blockdev write read size > 128k ...passed 00:09:00.434 Test: blockdev write read invalid size ...passed 00:09:00.434 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:00.434 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:00.434 Test: blockdev write read max offset ...passed 00:09:00.434 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:00.434 Test: blockdev writev readv 8 blocks ...passed 00:09:00.434 Test: blockdev writev readv 30 x 1block ...passed 00:09:00.434 Test: blockdev writev readv block ...passed 00:09:00.434 Test: blockdev writev readv size > 128k ...passed 00:09:00.434 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:00.434 Test: blockdev comparev and writev ...[2024-11-09 16:18:20.194488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27d60a000 len:0x1000 00:09:00.434 [2024-11-09 16:18:20.194530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:00.434 passed 00:09:00.434 Test: blockdev nvme passthru rw ...passed 00:09:00.434 Test: blockdev nvme passthru vendor specific ...[2024-11-09 16:18:20.196186] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:00.434 [2024-11-09 16:18:20.196237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:00.434 passed 00:09:00.434 Test: blockdev nvme admin passthru ...passed 00:09:00.434 Test: blockdev copy ...passed 00:09:00.434 Suite: bdevio tests on: Nvme2n3 00:09:00.434 Test: blockdev write read block ...passed 00:09:00.692 Test: blockdev write zeroes read block ...passed 00:09:00.692 Test: blockdev write zeroes read no split ...passed 00:09:00.692 Test: blockdev write zeroes read split ...passed 00:09:00.692 Test: blockdev write zeroes read split partial ...passed 00:09:00.692 Test: blockdev reset ...[2024-11-09 16:18:20.250395] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:00.692 passed 00:09:00.692 Test: blockdev write read 8 blocks ...[2024-11-09 16:18:20.253747] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:00.692 passed 00:09:00.692 Test: blockdev write read size > 128k ...passed 00:09:00.692 Test: blockdev write read invalid size ...passed 00:09:00.692 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:00.692 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:00.692 Test: blockdev write read max offset ...passed 00:09:00.692 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:00.692 Test: blockdev writev readv 8 blocks ...passed 00:09:00.692 Test: blockdev writev readv 30 x 1block ...passed 00:09:00.692 Test: blockdev writev readv block ...passed 00:09:00.692 Test: blockdev writev readv size > 128k ...passed 00:09:00.692 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:00.692 Test: blockdev comparev and writev ...[2024-11-09 16:18:20.263346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x273704000 len:0x1000 00:09:00.692 [2024-11-09 16:18:20.263381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:00.692 passed 00:09:00.692 Test: blockdev nvme passthru rw ...passed 00:09:00.692 Test: blockdev nvme passthru vendor specific ...[2024-11-09 16:18:20.264684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:00.692 [2024-11-09 16:18:20.264710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:00.692 passed 00:09:00.692 Test: blockdev nvme admin passthru ...passed 00:09:00.692 Test: blockdev copy ...passed 00:09:00.692 Suite: bdevio tests on: Nvme2n2 00:09:00.692 Test: blockdev write read block ...passed 00:09:00.692 Test: blockdev write zeroes read block ...passed 00:09:00.692 Test: blockdev write zeroes read no split ...passed 00:09:00.692 Test: blockdev write zeroes read split ...passed 00:09:00.692 Test: blockdev write zeroes read split partial ...passed 00:09:00.692 Test: blockdev reset ...[2024-11-09 16:18:20.320401] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:00.692 [2024-11-09 16:18:20.322823] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:00.692 passed 00:09:00.692 Test: blockdev write read 8 blocks ...passed 00:09:00.692 Test: blockdev write read size > 128k ...passed 00:09:00.692 Test: blockdev write read invalid size ...passed 00:09:00.692 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:00.692 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:00.692 Test: blockdev write read max offset ...passed 00:09:00.692 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:00.692 Test: blockdev writev readv 8 blocks ...passed 00:09:00.692 Test: blockdev writev readv 30 x 1block ...passed 00:09:00.692 Test: blockdev writev readv block ...passed 00:09:00.692 Test: blockdev writev readv size > 128k ...passed 00:09:00.692 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:00.692 Test: blockdev comparev and writev ...[2024-11-09 16:18:20.333185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x273704000 len:0x1000 00:09:00.692 [2024-11-09 16:18:20.333220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:00.692 passed 00:09:00.692 Test: blockdev nvme passthru rw ...passed 00:09:00.692 Test: blockdev nvme passthru vendor specific ...[2024-11-09 16:18:20.334480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:00.692 [2024-11-09 16:18:20.334505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:00.692 passed 00:09:00.692 Test: blockdev nvme admin passthru ...passed 00:09:00.692 Test: blockdev copy ...passed 00:09:00.692 Suite: bdevio tests on: Nvme2n1 00:09:00.692 Test: blockdev write read block ...passed 00:09:00.692 Test: blockdev write zeroes read block ...passed 00:09:00.692 Test: blockdev write zeroes read no split ...passed 00:09:00.692 Test: blockdev write zeroes read split ...passed 00:09:00.692 Test: blockdev write zeroes read split partial ...passed 00:09:00.692 Test: blockdev reset ...[2024-11-09 16:18:20.389683] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:00.692 passed 00:09:00.692 Test: blockdev write read 8 blocks ...[2024-11-09 16:18:20.392090] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:00.692 passed 00:09:00.692 Test: blockdev write read size > 128k ...passed 00:09:00.692 Test: blockdev write read invalid size ...passed 00:09:00.692 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:00.692 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:00.692 Test: blockdev write read max offset ...passed 00:09:00.692 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:00.692 Test: blockdev writev readv 8 blocks ...passed 00:09:00.692 Test: blockdev writev readv 30 x 1block ...passed 00:09:00.692 Test: blockdev writev readv block ...passed 00:09:00.692 Test: blockdev writev readv size > 128k ...passed 00:09:00.692 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:00.692 Test: blockdev comparev and writev ...[2024-11-09 16:18:20.402685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27c63c000 len:0x1000 00:09:00.692 [2024-11-09 16:18:20.402721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:00.692 passed 00:09:00.692 Test: blockdev nvme passthru rw ...passed 00:09:00.692 Test: blockdev nvme passthru vendor specific ...[2024-11-09 16:18:20.404151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:00.692 [2024-11-09 16:18:20.404175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:00.692 passed 00:09:00.692 Test: blockdev nvme admin passthru ...passed 00:09:00.692 Test: blockdev copy ...passed 00:09:00.692 Suite: bdevio tests on: Nvme1n1 00:09:00.692 Test: blockdev write read block ...passed 00:09:00.692 Test: blockdev write zeroes read block ...passed 00:09:00.692 Test: blockdev write zeroes read no split ...passed 00:09:00.692 Test: blockdev write zeroes read split ...passed 00:09:00.692 Test: blockdev write zeroes read split partial ...passed 00:09:00.692 Test: blockdev reset ...[2024-11-09 16:18:20.458569] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:00.692 [2024-11-09 16:18:20.460841] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:00.692 passed 00:09:00.951 Test: blockdev write read 8 blocks ...passed 00:09:00.951 Test: blockdev write read size > 128k ...passed 00:09:00.951 Test: blockdev write read invalid size ...passed 00:09:00.951 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:00.951 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:00.951 Test: blockdev write read max offset ...passed 00:09:00.951 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:00.951 Test: blockdev writev readv 8 blocks ...passed 00:09:00.951 Test: blockdev writev readv 30 x 1block ...passed 00:09:00.951 Test: blockdev writev readv block ...passed 00:09:00.951 Test: blockdev writev readv size > 128k ...passed 00:09:00.951 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:00.951 Test: blockdev comparev and writev ...[2024-11-09 16:18:20.471216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x27c638000 len:0x1000 00:09:00.951 [2024-11-09 16:18:20.471258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:09:00.951 passed 00:09:00.951 Test: blockdev nvme passthru rw ...passed 00:09:00.951 Test: blockdev nvme passthru vendor specific ...[2024-11-09 16:18:20.472516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:09:00.951 [2024-11-09 16:18:20.472540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:09:00.951 passed 00:09:00.951 Test: blockdev nvme admin passthru ...passed 00:09:00.951 Test: blockdev copy ...passed 00:09:00.951 Suite: bdevio tests on: Nvme0n1p2 00:09:00.951 Test: blockdev write read block ...passed 00:09:00.951 Test: blockdev write zeroes read block ...passed 00:09:00.951 Test: blockdev write zeroes read no split ...passed 00:09:00.951 Test: blockdev write zeroes read split ...passed 00:09:00.951 Test: blockdev write zeroes read split partial ...passed 00:09:00.951 Test: blockdev reset ...[2024-11-09 16:18:20.527109] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:00.951 passed 00:09:00.951 Test: blockdev write read 8 blocks ...[2024-11-09 16:18:20.529386] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:00.951 passed 00:09:00.951 Test: blockdev write read size > 128k ...passed 00:09:00.951 Test: blockdev write read invalid size ...passed 00:09:00.951 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:00.951 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:00.951 Test: blockdev write read max offset ...passed 00:09:00.951 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:00.951 Test: blockdev writev readv 8 blocks ...passed 00:09:00.951 Test: blockdev writev readv 30 x 1block ...passed 00:09:00.951 Test: blockdev writev readv block ...passed 00:09:00.951 Test: blockdev writev readv size > 128k ...passed 00:09:00.951 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:00.951 Test: blockdev comparev and writev ...passed 00:09:00.951 Test: blockdev nvme passthru rw ...passed 00:09:00.951 Test: blockdev nvme passthru vendor specific ...passed 00:09:00.951 Test: blockdev nvme admin passthru ...passed 00:09:00.951 Test: blockdev copy ...[2024-11-09 16:18:20.538869] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p2 since it has 00:09:00.951 separate metadata which is not supported yet. 00:09:00.951 passed 00:09:00.952 Suite: bdevio tests on: Nvme0n1p1 00:09:00.952 Test: blockdev write read block ...passed 00:09:00.952 Test: blockdev write zeroes read block ...passed 00:09:00.952 Test: blockdev write zeroes read no split ...passed 00:09:00.952 Test: blockdev write zeroes read split ...passed 00:09:00.952 Test: blockdev write zeroes read split partial ...passed 00:09:00.952 Test: blockdev reset ...[2024-11-09 16:18:20.585564] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:00.952 [2024-11-09 16:18:20.587784] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:00.952 passed 00:09:00.952 Test: blockdev write read 8 blocks ...passed 00:09:00.952 Test: blockdev write read size > 128k ...passed 00:09:00.952 Test: blockdev write read invalid size ...passed 00:09:00.952 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:00.952 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:00.952 Test: blockdev write read max offset ...passed 00:09:00.952 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:00.952 Test: blockdev writev readv 8 blocks ...passed 00:09:00.952 Test: blockdev writev readv 30 x 1block ...passed 00:09:00.952 Test: blockdev writev readv block ...passed 00:09:00.952 Test: blockdev writev readv size > 128k ...passed 00:09:00.952 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:00.952 Test: blockdev comparev and writev ...passed 00:09:00.952 Test: blockdev nvme passthru rw ...passed 00:09:00.952 Test: blockdev nvme passthru vendor specific ...passed 00:09:00.952 Test: blockdev nvme admin passthru ...passed 00:09:00.952 Test: blockdev copy ...[2024-11-09 16:18:20.597605] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1p1 since it has 00:09:00.952 separate metadata which is not supported yet. 
00:09:00.952 passed 00:09:00.952 00:09:00.952 Run Summary: Type Total Ran Passed Failed Inactive 00:09:00.952 suites 7 7 n/a 0 0 00:09:00.952 tests 161 161 161 0 0 00:09:00.952 asserts 1006 1006 1006 0 n/a 00:09:00.952 00:09:00.952 Elapsed time = 1.244 seconds 00:09:00.952 0 00:09:00.952 16:18:20 -- bdev/blockdev.sh@293 -- # killprocess 62094 00:09:00.952 16:18:20 -- common/autotest_common.sh@936 -- # '[' -z 62094 ']' 00:09:00.952 16:18:20 -- common/autotest_common.sh@940 -- # kill -0 62094 00:09:00.952 16:18:20 -- common/autotest_common.sh@941 -- # uname 00:09:00.952 16:18:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:00.952 16:18:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62094 00:09:00.952 16:18:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:00.952 16:18:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:00.952 killing process with pid 62094 00:09:00.952 16:18:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62094' 00:09:00.952 16:18:20 -- common/autotest_common.sh@955 -- # kill 62094 00:09:00.952 16:18:20 -- common/autotest_common.sh@960 -- # wait 62094 00:09:01.519 16:18:21 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:09:01.519 00:09:01.519 real 0m2.047s 00:09:01.519 user 0m4.984s 00:09:01.519 sys 0m0.290s 00:09:01.519 16:18:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:01.519 16:18:21 -- common/autotest_common.sh@10 -- # set +x 00:09:01.519 ************************************ 00:09:01.519 END TEST bdev_bounds 00:09:01.519 ************************************ 00:09:01.519 16:18:21 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:01.519 16:18:21 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:09:01.519 16:18:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:01.519 16:18:21 -- common/autotest_common.sh@10 -- # set +x 00:09:01.519 ************************************ 00:09:01.519 START TEST bdev_nbd 00:09:01.519 ************************************ 00:09:01.519 16:18:21 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:09:01.519 16:18:21 -- bdev/blockdev.sh@298 -- # uname -s 00:09:01.519 16:18:21 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:09:01.519 16:18:21 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:01.519 16:18:21 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:01.519 16:18:21 -- bdev/blockdev.sh@302 -- # bdev_all=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:01.519 16:18:21 -- bdev/blockdev.sh@302 -- # local bdev_all 00:09:01.519 16:18:21 -- bdev/blockdev.sh@303 -- # local bdev_num=7 00:09:01.519 16:18:21 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:09:01.519 16:18:21 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:09:01.519 16:18:21 -- bdev/blockdev.sh@309 -- # local nbd_all 00:09:01.519 16:18:21 -- bdev/blockdev.sh@310 -- # bdev_num=7 00:09:01.519 16:18:21 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' 
'/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:01.519 16:18:21 -- bdev/blockdev.sh@312 -- # local nbd_list 00:09:01.520 16:18:21 -- bdev/blockdev.sh@313 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:01.520 16:18:21 -- bdev/blockdev.sh@313 -- # local bdev_list 00:09:01.520 16:18:21 -- bdev/blockdev.sh@316 -- # nbd_pid=62148 00:09:01.520 16:18:21 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:09:01.520 16:18:21 -- bdev/blockdev.sh@318 -- # waitforlisten 62148 /var/tmp/spdk-nbd.sock 00:09:01.520 16:18:21 -- common/autotest_common.sh@829 -- # '[' -z 62148 ']' 00:09:01.520 16:18:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:01.520 16:18:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.520 16:18:21 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:09:01.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:01.520 16:18:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:01.520 16:18:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.520 16:18:21 -- common/autotest_common.sh@10 -- # set +x 00:09:01.780 [2024-11-09 16:18:21.300561] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:01.780 [2024-11-09 16:18:21.300662] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:01.780 [2024-11-09 16:18:21.450440] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.039 [2024-11-09 16:18:21.591093] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.605 16:18:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:02.605 16:18:22 -- common/autotest_common.sh@862 -- # return 0 00:09:02.605 16:18:22 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:02.605 16:18:22 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.605 16:18:22 -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:02.605 16:18:22 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:09:02.605 16:18:22 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:09:02.605 16:18:22 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:02.605 16:18:22 -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:02.605 16:18:22 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:09:02.605 16:18:22 -- bdev/nbd_common.sh@24 -- # local i 00:09:02.605 16:18:22 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:09:02.605 16:18:22 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:09:02.605 16:18:22 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:02.605 16:18:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 00:09:02.605 16:18:22 -- 
bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:09:02.605 16:18:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:09:02.605 16:18:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:09:02.605 16:18:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:02.605 16:18:22 -- common/autotest_common.sh@867 -- # local i 00:09:02.605 16:18:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:02.605 16:18:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:02.605 16:18:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:02.605 16:18:22 -- common/autotest_common.sh@871 -- # break 00:09:02.605 16:18:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:02.605 16:18:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:02.605 16:18:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:02.605 1+0 records in 00:09:02.605 1+0 records out 00:09:02.605 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000570471 s, 7.2 MB/s 00:09:02.605 16:18:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.605 16:18:22 -- common/autotest_common.sh@884 -- # size=4096 00:09:02.605 16:18:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.605 16:18:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:02.605 16:18:22 -- common/autotest_common.sh@887 -- # return 0 00:09:02.605 16:18:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:02.605 16:18:22 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:02.605 16:18:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 00:09:02.864 16:18:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:09:02.864 16:18:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:09:02.864 16:18:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:09:02.864 16:18:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:02.864 16:18:22 -- common/autotest_common.sh@867 -- # local i 00:09:02.864 16:18:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:02.864 16:18:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:02.864 16:18:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:02.864 16:18:22 -- common/autotest_common.sh@871 -- # break 00:09:02.864 16:18:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:02.864 16:18:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:02.864 16:18:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:02.864 1+0 records in 00:09:02.864 1+0 records out 00:09:02.864 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597262 s, 6.9 MB/s 00:09:02.864 16:18:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.864 16:18:22 -- common/autotest_common.sh@884 -- # size=4096 00:09:02.864 16:18:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:02.864 16:18:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:02.864 16:18:22 -- common/autotest_common.sh@887 -- # return 0 00:09:02.864 16:18:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:02.864 16:18:22 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:02.864 16:18:22 -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:09:03.122 16:18:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:09:03.122 16:18:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:09:03.122 16:18:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:09:03.122 16:18:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:09:03.122 16:18:22 -- common/autotest_common.sh@867 -- # local i 00:09:03.122 16:18:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:03.122 16:18:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:03.122 16:18:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:09:03.122 16:18:22 -- common/autotest_common.sh@871 -- # break 00:09:03.122 16:18:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:03.122 16:18:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:03.122 16:18:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:03.122 1+0 records in 00:09:03.122 1+0 records out 00:09:03.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000823084 s, 5.0 MB/s 00:09:03.122 16:18:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:03.122 16:18:22 -- common/autotest_common.sh@884 -- # size=4096 00:09:03.122 16:18:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:03.122 16:18:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:03.122 16:18:22 -- common/autotest_common.sh@887 -- # return 0 00:09:03.122 16:18:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:03.122 16:18:22 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:03.122 16:18:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:09:03.122 16:18:22 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:09:03.122 16:18:22 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:09:03.122 16:18:22 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:09:03.122 16:18:22 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:09:03.122 16:18:22 -- common/autotest_common.sh@867 -- # local i 00:09:03.122 16:18:22 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:03.122 16:18:22 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:03.122 16:18:22 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:09:03.380 16:18:22 -- common/autotest_common.sh@871 -- # break 00:09:03.380 16:18:22 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:03.380 16:18:22 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:03.380 16:18:22 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:03.380 1+0 records in 00:09:03.380 1+0 records out 00:09:03.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000651451 s, 6.3 MB/s 00:09:03.380 16:18:22 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:03.380 16:18:22 -- common/autotest_common.sh@884 -- # size=4096 00:09:03.380 16:18:22 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:03.380 16:18:22 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:03.380 16:18:22 -- common/autotest_common.sh@887 -- # return 0 00:09:03.380 16:18:22 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:03.380 16:18:22 -- 
bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:03.380 16:18:22 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:09:03.380 16:18:23 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:09:03.380 16:18:23 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:09:03.380 16:18:23 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:09:03.380 16:18:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:09:03.380 16:18:23 -- common/autotest_common.sh@867 -- # local i 00:09:03.380 16:18:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:03.380 16:18:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:03.380 16:18:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:09:03.380 16:18:23 -- common/autotest_common.sh@871 -- # break 00:09:03.380 16:18:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:03.380 16:18:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:03.381 16:18:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:03.381 1+0 records in 00:09:03.381 1+0 records out 00:09:03.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000726917 s, 5.6 MB/s 00:09:03.381 16:18:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:03.381 16:18:23 -- common/autotest_common.sh@884 -- # size=4096 00:09:03.381 16:18:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:03.381 16:18:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:03.381 16:18:23 -- common/autotest_common.sh@887 -- # return 0 00:09:03.381 16:18:23 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:03.381 16:18:23 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:03.381 16:18:23 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:09:03.639 16:18:23 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:09:03.639 16:18:23 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:09:03.639 16:18:23 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:09:03.639 16:18:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:09:03.639 16:18:23 -- common/autotest_common.sh@867 -- # local i 00:09:03.639 16:18:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:03.639 16:18:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:03.639 16:18:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:09:03.639 16:18:23 -- common/autotest_common.sh@871 -- # break 00:09:03.639 16:18:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:03.639 16:18:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:03.639 16:18:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:03.639 1+0 records in 00:09:03.639 1+0 records out 00:09:03.639 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101701 s, 4.0 MB/s 00:09:03.639 16:18:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:03.639 16:18:23 -- common/autotest_common.sh@884 -- # size=4096 00:09:03.639 16:18:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:03.639 16:18:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:03.639 16:18:23 -- common/autotest_common.sh@887 -- # return 0 
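Every nbd_start_disk above is followed by the same waitfornbd probe: poll /proc/partitions until the kernel lists the device, then prove the NBD connection is live by copying one 4096-byte block with direct I/O and checking that the copy is non-empty. A condensed sketch of that probe, keeping the 20-attempt limits visible in the trace (the delay between attempts is not shown and is guessed here):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off, not visible in the trace
        done
        for ((i = 1; i <= 20; i++)); do
            # one direct read through the device; success ends the loop
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct && break
        done
        [[ -s /tmp/nbdtest ]]   # non-empty copy means the device answered
        local rc=$?
        rm -f /tmp/nbdtest
        return "$rc"
    }

The traced version checks the copied size with stat -c %s and fails on zero, which is why each probe logs "4096 bytes ... copied" before return 0.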
00:09:03.639 16:18:23 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:03.639 16:18:23 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:03.639 16:18:23 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:09:03.897 16:18:23 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:09:03.897 16:18:23 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:09:03.897 16:18:23 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:09:03.897 16:18:23 -- common/autotest_common.sh@866 -- # local nbd_name=nbd6 00:09:03.897 16:18:23 -- common/autotest_common.sh@867 -- # local i 00:09:03.897 16:18:23 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:03.897 16:18:23 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:03.897 16:18:23 -- common/autotest_common.sh@870 -- # grep -q -w nbd6 /proc/partitions 00:09:03.897 16:18:23 -- common/autotest_common.sh@871 -- # break 00:09:03.897 16:18:23 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:03.897 16:18:23 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:03.897 16:18:23 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:03.897 1+0 records in 00:09:03.897 1+0 records out 00:09:03.897 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000722422 s, 5.7 MB/s 00:09:03.897 16:18:23 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:03.897 16:18:23 -- common/autotest_common.sh@884 -- # size=4096 00:09:03.897 16:18:23 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:03.897 16:18:23 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:03.897 16:18:23 -- common/autotest_common.sh@887 -- # return 0 00:09:03.897 16:18:23 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:09:03.897 16:18:23 -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:09:03.897 16:18:23 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:04.155 16:18:23 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:09:04.156 { 00:09:04.156 "nbd_device": "/dev/nbd0", 00:09:04.156 "bdev_name": "Nvme0n1p1" 00:09:04.156 }, 00:09:04.156 { 00:09:04.156 "nbd_device": "/dev/nbd1", 00:09:04.156 "bdev_name": "Nvme0n1p2" 00:09:04.156 }, 00:09:04.156 { 00:09:04.156 "nbd_device": "/dev/nbd2", 00:09:04.156 "bdev_name": "Nvme1n1" 00:09:04.156 }, 00:09:04.156 { 00:09:04.156 "nbd_device": "/dev/nbd3", 00:09:04.156 "bdev_name": "Nvme2n1" 00:09:04.156 }, 00:09:04.156 { 00:09:04.156 "nbd_device": "/dev/nbd4", 00:09:04.156 "bdev_name": "Nvme2n2" 00:09:04.156 }, 00:09:04.156 { 00:09:04.156 "nbd_device": "/dev/nbd5", 00:09:04.156 "bdev_name": "Nvme2n3" 00:09:04.156 }, 00:09:04.156 { 00:09:04.156 "nbd_device": "/dev/nbd6", 00:09:04.156 "bdev_name": "Nvme3n1" 00:09:04.156 } 00:09:04.156 ]' 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@119 -- # echo '[ 00:09:04.156 { 00:09:04.156 "nbd_device": "/dev/nbd0", 00:09:04.156 "bdev_name": "Nvme0n1p1" 00:09:04.156 }, 00:09:04.156 { 00:09:04.156 "nbd_device": "/dev/nbd1", 00:09:04.156 "bdev_name": "Nvme0n1p2" 00:09:04.156 }, 00:09:04.156 { 00:09:04.156 "nbd_device": "/dev/nbd2", 00:09:04.156 "bdev_name": "Nvme1n1" 00:09:04.156 }, 00:09:04.156 { 00:09:04.156 "nbd_device": 
"/dev/nbd3", 00:09:04.156 "bdev_name": "Nvme2n1" 00:09:04.156 }, 00:09:04.156 { 00:09:04.156 "nbd_device": "/dev/nbd4", 00:09:04.156 "bdev_name": "Nvme2n2" 00:09:04.156 }, 00:09:04.156 { 00:09:04.156 "nbd_device": "/dev/nbd5", 00:09:04.156 "bdev_name": "Nvme2n3" 00:09:04.156 }, 00:09:04.156 { 00:09:04.156 "nbd_device": "/dev/nbd6", 00:09:04.156 "bdev_name": "Nvme3n1" 00:09:04.156 } 00:09:04.156 ]' 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@51 -- # local i 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@41 -- # break 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.156 16:18:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:04.414 16:18:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:04.414 16:18:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:04.414 16:18:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:04.414 16:18:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.414 16:18:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.414 16:18:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:04.414 16:18:24 -- bdev/nbd_common.sh@41 -- # break 00:09:04.414 16:18:24 -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.414 16:18:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.414 16:18:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:09:04.673 16:18:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:09:04.673 16:18:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:09:04.673 16:18:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:09:04.673 16:18:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.673 16:18:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.673 16:18:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:09:04.673 16:18:24 -- bdev/nbd_common.sh@41 -- # break 00:09:04.673 16:18:24 -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.673 16:18:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.673 16:18:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:09:04.673 16:18:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 
00:09:04.932 16:18:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:09:04.932 16:18:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:09:04.932 16:18:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.932 16:18:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.932 16:18:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:09:04.932 16:18:24 -- bdev/nbd_common.sh@41 -- # break 00:09:04.932 16:18:24 -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.932 16:18:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.932 16:18:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:09:04.932 16:18:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:09:04.932 16:18:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:09:04.932 16:18:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:09:04.932 16:18:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:04.932 16:18:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:04.932 16:18:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:09:04.932 16:18:24 -- bdev/nbd_common.sh@41 -- # break 00:09:04.932 16:18:24 -- bdev/nbd_common.sh@45 -- # return 0 00:09:04.932 16:18:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:04.932 16:18:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:09:05.190 16:18:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:09:05.190 16:18:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:09:05.190 16:18:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:09:05.190 16:18:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.190 16:18:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.190 16:18:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:09:05.190 16:18:24 -- bdev/nbd_common.sh@41 -- # break 00:09:05.190 16:18:24 -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.190 16:18:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:05.190 16:18:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@41 -- # break 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@45 -- # return 0 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:05.544 16:18:25 -- 
bdev/nbd_common.sh@65 -- # true 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@65 -- # count=0 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@122 -- # count=0 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@127 -- # return 0 00:09:05.544 16:18:25 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1p1 Nvme0n1p2 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1p1' 'Nvme0n1p2' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@12 -- # local i 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:05.544 16:18:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0 00:09:05.803 /dev/nbd0 00:09:05.803 16:18:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:05.803 16:18:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:05.803 16:18:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:09:05.803 16:18:25 -- common/autotest_common.sh@867 -- # local i 00:09:05.803 16:18:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:05.803 16:18:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:05.803 16:18:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:09:05.803 16:18:25 -- common/autotest_common.sh@871 -- # break 00:09:05.803 16:18:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:05.803 16:18:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:05.803 16:18:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:05.803 1+0 records in 00:09:05.803 1+0 records out 00:09:05.803 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485819 s, 8.4 MB/s 00:09:05.803 16:18:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.803 16:18:25 -- common/autotest_common.sh@884 -- # size=4096 00:09:05.803 16:18:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:05.803 16:18:25 
-- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:05.803 16:18:25 -- common/autotest_common.sh@887 -- # return 0 00:09:05.803 16:18:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:05.803 16:18:25 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:05.803 16:18:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p2 /dev/nbd1 00:09:06.061 /dev/nbd1 00:09:06.061 16:18:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:06.061 16:18:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:06.061 16:18:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:09:06.061 16:18:25 -- common/autotest_common.sh@867 -- # local i 00:09:06.061 16:18:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:06.061 16:18:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:06.061 16:18:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:09:06.061 16:18:25 -- common/autotest_common.sh@871 -- # break 00:09:06.061 16:18:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:06.061 16:18:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:06.061 16:18:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:06.061 1+0 records in 00:09:06.061 1+0 records out 00:09:06.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062117 s, 6.6 MB/s 00:09:06.061 16:18:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:06.061 16:18:25 -- common/autotest_common.sh@884 -- # size=4096 00:09:06.061 16:18:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:06.061 16:18:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:06.061 16:18:25 -- common/autotest_common.sh@887 -- # return 0 00:09:06.061 16:18:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:06.061 16:18:25 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:06.061 16:18:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd10 00:09:06.319 /dev/nbd10 00:09:06.319 16:18:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:09:06.319 16:18:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:09:06.319 16:18:25 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:09:06.319 16:18:25 -- common/autotest_common.sh@867 -- # local i 00:09:06.319 16:18:25 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:06.319 16:18:25 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:06.319 16:18:25 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:09:06.319 16:18:25 -- common/autotest_common.sh@871 -- # break 00:09:06.319 16:18:25 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:06.319 16:18:25 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:06.319 16:18:25 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:06.319 1+0 records in 00:09:06.319 1+0 records out 00:09:06.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000637744 s, 6.4 MB/s 00:09:06.319 16:18:25 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:06.319 16:18:25 -- common/autotest_common.sh@884 -- # size=4096 00:09:06.319 16:18:25 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:06.319 
16:18:25 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:06.319 16:18:25 -- common/autotest_common.sh@887 -- # return 0 00:09:06.319 16:18:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:06.319 16:18:25 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:06.319 16:18:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:09:06.576 /dev/nbd11 00:09:06.576 16:18:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:09:06.576 16:18:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:09:06.576 16:18:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:09:06.576 16:18:26 -- common/autotest_common.sh@867 -- # local i 00:09:06.576 16:18:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:06.577 16:18:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:06.577 16:18:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:09:06.577 16:18:26 -- common/autotest_common.sh@871 -- # break 00:09:06.577 16:18:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:06.577 16:18:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:06.577 16:18:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:06.577 1+0 records in 00:09:06.577 1+0 records out 00:09:06.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000417883 s, 9.8 MB/s 00:09:06.577 16:18:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:06.577 16:18:26 -- common/autotest_common.sh@884 -- # size=4096 00:09:06.577 16:18:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:06.577 16:18:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:06.577 16:18:26 -- common/autotest_common.sh@887 -- # return 0 00:09:06.577 16:18:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:06.577 16:18:26 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:06.577 16:18:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:09:06.577 /dev/nbd12 00:09:06.577 16:18:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:09:06.577 16:18:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:09:06.577 16:18:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:09:06.577 16:18:26 -- common/autotest_common.sh@867 -- # local i 00:09:06.577 16:18:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:06.577 16:18:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:06.577 16:18:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:09:06.577 16:18:26 -- common/autotest_common.sh@871 -- # break 00:09:06.577 16:18:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:06.577 16:18:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:06.577 16:18:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:06.577 1+0 records in 00:09:06.577 1+0 records out 00:09:06.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000625266 s, 6.6 MB/s 00:09:06.577 16:18:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:06.577 16:18:26 -- common/autotest_common.sh@884 -- # size=4096 00:09:06.577 16:18:26 -- common/autotest_common.sh@885 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:06.577 16:18:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:06.577 16:18:26 -- common/autotest_common.sh@887 -- # return 0 00:09:06.577 16:18:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:06.577 16:18:26 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:06.577 16:18:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:09:06.835 /dev/nbd13 00:09:06.835 16:18:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:09:06.835 16:18:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:09:06.835 16:18:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:09:06.835 16:18:26 -- common/autotest_common.sh@867 -- # local i 00:09:06.835 16:18:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:06.835 16:18:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:06.835 16:18:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 00:09:06.835 16:18:26 -- common/autotest_common.sh@871 -- # break 00:09:06.835 16:18:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:06.835 16:18:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:06.835 16:18:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:06.835 1+0 records in 00:09:06.835 1+0 records out 00:09:06.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000457079 s, 9.0 MB/s 00:09:06.835 16:18:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:06.835 16:18:26 -- common/autotest_common.sh@884 -- # size=4096 00:09:06.835 16:18:26 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:06.835 16:18:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:06.835 16:18:26 -- common/autotest_common.sh@887 -- # return 0 00:09:06.835 16:18:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:06.835 16:18:26 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:06.835 16:18:26 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:09:07.093 /dev/nbd14 00:09:07.093 16:18:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:09:07.093 16:18:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:09:07.093 16:18:26 -- common/autotest_common.sh@866 -- # local nbd_name=nbd14 00:09:07.093 16:18:26 -- common/autotest_common.sh@867 -- # local i 00:09:07.093 16:18:26 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:09:07.093 16:18:26 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:09:07.093 16:18:26 -- common/autotest_common.sh@870 -- # grep -q -w nbd14 /proc/partitions 00:09:07.093 16:18:26 -- common/autotest_common.sh@871 -- # break 00:09:07.093 16:18:26 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:09:07.093 16:18:26 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:09:07.093 16:18:26 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:07.093 1+0 records in 00:09:07.093 1+0 records out 00:09:07.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498506 s, 8.2 MB/s 00:09:07.093 16:18:26 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:07.093 16:18:26 -- common/autotest_common.sh@884 -- # size=4096 00:09:07.093 16:18:26 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:09:07.093 16:18:26 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:09:07.093 16:18:26 -- common/autotest_common.sh@887 -- # return 0 00:09:07.093 16:18:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:07.093 16:18:26 -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:09:07.093 16:18:26 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:07.093 16:18:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:07.093 16:18:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:07.352 16:18:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:07.352 { 00:09:07.352 "nbd_device": "/dev/nbd0", 00:09:07.352 "bdev_name": "Nvme0n1p1" 00:09:07.352 }, 00:09:07.352 { 00:09:07.352 "nbd_device": "/dev/nbd1", 00:09:07.352 "bdev_name": "Nvme0n1p2" 00:09:07.352 }, 00:09:07.352 { 00:09:07.352 "nbd_device": "/dev/nbd10", 00:09:07.352 "bdev_name": "Nvme1n1" 00:09:07.352 }, 00:09:07.352 { 00:09:07.352 "nbd_device": "/dev/nbd11", 00:09:07.352 "bdev_name": "Nvme2n1" 00:09:07.352 }, 00:09:07.352 { 00:09:07.352 "nbd_device": "/dev/nbd12", 00:09:07.352 "bdev_name": "Nvme2n2" 00:09:07.352 }, 00:09:07.352 { 00:09:07.352 "nbd_device": "/dev/nbd13", 00:09:07.352 "bdev_name": "Nvme2n3" 00:09:07.352 }, 00:09:07.352 { 00:09:07.352 "nbd_device": "/dev/nbd14", 00:09:07.352 "bdev_name": "Nvme3n1" 00:09:07.352 } 00:09:07.352 ]' 00:09:07.352 16:18:26 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:07.352 { 00:09:07.352 "nbd_device": "/dev/nbd0", 00:09:07.352 "bdev_name": "Nvme0n1p1" 00:09:07.352 }, 00:09:07.352 { 00:09:07.352 "nbd_device": "/dev/nbd1", 00:09:07.352 "bdev_name": "Nvme0n1p2" 00:09:07.352 }, 00:09:07.352 { 00:09:07.352 "nbd_device": "/dev/nbd10", 00:09:07.352 "bdev_name": "Nvme1n1" 00:09:07.352 }, 00:09:07.352 { 00:09:07.352 "nbd_device": "/dev/nbd11", 00:09:07.352 "bdev_name": "Nvme2n1" 00:09:07.352 }, 00:09:07.352 { 00:09:07.352 "nbd_device": "/dev/nbd12", 00:09:07.352 "bdev_name": "Nvme2n2" 00:09:07.352 }, 00:09:07.352 { 00:09:07.352 "nbd_device": "/dev/nbd13", 00:09:07.352 "bdev_name": "Nvme2n3" 00:09:07.352 }, 00:09:07.352 { 00:09:07.352 "nbd_device": "/dev/nbd14", 00:09:07.352 "bdev_name": "Nvme3n1" 00:09:07.352 } 00:09:07.352 ]' 00:09:07.352 16:18:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:07.352 16:18:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:07.352 /dev/nbd1 00:09:07.352 /dev/nbd10 00:09:07.352 /dev/nbd11 00:09:07.352 /dev/nbd12 00:09:07.352 /dev/nbd13 00:09:07.352 /dev/nbd14' 00:09:07.352 16:18:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:07.352 16:18:26 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:07.352 /dev/nbd1 00:09:07.352 /dev/nbd10 00:09:07.352 /dev/nbd11 00:09:07.352 /dev/nbd12 00:09:07.352 /dev/nbd13 00:09:07.352 /dev/nbd14' 00:09:07.352 16:18:26 -- bdev/nbd_common.sh@65 -- # count=7 00:09:07.352 16:18:26 -- bdev/nbd_common.sh@66 -- # echo 7 00:09:07.352 16:18:26 -- bdev/nbd_common.sh@95 -- # count=7 00:09:07.352 16:18:26 -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:09:07.352 16:18:26 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:09:07.352 16:18:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:07.352 16:18:26 -- bdev/nbd_common.sh@70 -- # local 
nbd_list 00:09:07.352 16:18:26 -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:07.352 16:18:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:07.352 16:18:26 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:07.352 16:18:26 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:09:07.352 256+0 records in 00:09:07.352 256+0 records out 00:09:07.352 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0068525 s, 153 MB/s 00:09:07.352 16:18:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:07.352 16:18:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:07.610 256+0 records in 00:09:07.610 256+0 records out 00:09:07.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.1232 s, 8.5 MB/s 00:09:07.610 16:18:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:07.610 16:18:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:07.610 256+0 records in 00:09:07.610 256+0 records out 00:09:07.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.100001 s, 10.5 MB/s 00:09:07.610 16:18:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:07.610 16:18:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:09:07.610 256+0 records in 00:09:07.610 256+0 records out 00:09:07.610 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.123435 s, 8.5 MB/s 00:09:07.610 16:18:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:07.610 16:18:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:09:07.868 256+0 records in 00:09:07.868 256+0 records out 00:09:07.868 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.105679 s, 9.9 MB/s 00:09:07.868 16:18:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:07.868 16:18:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:09:08.126 256+0 records in 00:09:08.126 256+0 records out 00:09:08.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.181999 s, 5.8 MB/s 00:09:08.126 16:18:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:08.126 16:18:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:09:08.126 256+0 records in 00:09:08.126 256+0 records out 00:09:08.126 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.106259 s, 9.9 MB/s 00:09:08.126 16:18:27 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:08.126 16:18:27 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:09:08.385 256+0 records in 00:09:08.385 256+0 records out 00:09:08.385 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.172686 s, 6.1 MB/s 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@71 -- # local 
operation=verify 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:08.385 16:18:27 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:09:08.385 16:18:28 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:09:08.385 16:18:28 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:08.385 16:18:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:08.385 16:18:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:08.385 16:18:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:08.385 16:18:28 -- bdev/nbd_common.sh@51 -- # local i 00:09:08.385 16:18:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:08.385 16:18:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:08.643 16:18:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:08.643 16:18:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:08.643 16:18:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:08.643 16:18:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:08.643 16:18:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:08.643 16:18:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:08.643 16:18:28 -- bdev/nbd_common.sh@41 -- # break 00:09:08.643 16:18:28 -- bdev/nbd_common.sh@45 -- # return 0 00:09:08.643 16:18:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:08.643 16:18:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd1 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@41 -- # break 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@45 -- # return 0 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@41 -- # break 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@45 -- # return 0 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:08.900 16:18:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:09:09.158 16:18:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:09:09.158 16:18:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:09:09.158 16:18:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:09:09.158 16:18:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:09.158 16:18:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:09.158 16:18:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:09:09.158 16:18:28 -- bdev/nbd_common.sh@41 -- # break 00:09:09.158 16:18:28 -- bdev/nbd_common.sh@45 -- # return 0 00:09:09.158 16:18:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:09.158 16:18:28 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:09:09.416 16:18:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:09:09.416 16:18:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:09:09.416 16:18:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:09:09.416 16:18:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:09.416 16:18:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:09.416 16:18:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:09:09.416 16:18:29 -- bdev/nbd_common.sh@41 -- # break 00:09:09.416 16:18:29 -- bdev/nbd_common.sh@45 -- # return 0 00:09:09.416 16:18:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:09.416 16:18:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@41 -- # break 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@45 -- # return 0 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:09:09.674 16:18:29 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@41 -- # break 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@45 -- # return 0 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.674 16:18:29 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:09.932 16:18:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:09.932 16:18:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:09.932 16:18:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:09.932 16:18:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:09.932 16:18:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:09.932 16:18:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:09.932 16:18:29 -- bdev/nbd_common.sh@65 -- # true 00:09:09.932 16:18:29 -- bdev/nbd_common.sh@65 -- # count=0 00:09:09.932 16:18:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:09.932 16:18:29 -- bdev/nbd_common.sh@104 -- # count=0 00:09:09.932 16:18:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:09.932 16:18:29 -- bdev/nbd_common.sh@109 -- # return 0 00:09:09.932 16:18:29 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:09:09.932 16:18:29 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:09.932 16:18:29 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:09:09.932 16:18:29 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:09:09.932 16:18:29 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:09:09.932 16:18:29 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:09:10.192 malloc_lvol_verify 00:09:10.192 16:18:29 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:09:10.449 6db83877-5a56-443d-89e5-d9605b16228c 00:09:10.449 16:18:30 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:09:10.449 8bd85773-5d2a-48cb-b84c-66ff69349c8f 00:09:10.707 16:18:30 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:09:10.707 /dev/nbd0 00:09:10.707 16:18:30 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:09:10.707 mke2fs 1.47.0 (5-Feb-2023) 00:09:10.707 Discarding device blocks: 0/4096 done 00:09:10.707 Creating filesystem with 4096 1k blocks and 1024 inodes 00:09:10.707 00:09:10.707 Allocating group tables: 0/1 done 00:09:10.707 Writing inode tables: 0/1 done 00:09:10.707 Creating journal (1024 blocks): done 
00:09:10.708 Writing superblocks and filesystem accounting information: 0/1 done 00:09:10.708 00:09:10.708 16:18:30 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:09:10.708 16:18:30 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:09:10.708 16:18:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:10.708 16:18:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:10.708 16:18:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:10.708 16:18:30 -- bdev/nbd_common.sh@51 -- # local i 00:09:10.708 16:18:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:10.708 16:18:30 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:10.966 16:18:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:10.966 16:18:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:10.966 16:18:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:10.966 16:18:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:10.966 16:18:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:10.966 16:18:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:10.966 16:18:30 -- bdev/nbd_common.sh@41 -- # break 00:09:10.966 16:18:30 -- bdev/nbd_common.sh@45 -- # return 0 00:09:10.966 16:18:30 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:09:10.966 16:18:30 -- bdev/nbd_common.sh@147 -- # return 0 00:09:10.966 16:18:30 -- bdev/blockdev.sh@324 -- # killprocess 62148 00:09:10.966 16:18:30 -- common/autotest_common.sh@936 -- # '[' -z 62148 ']' 00:09:10.966 16:18:30 -- common/autotest_common.sh@940 -- # kill -0 62148 00:09:10.966 16:18:30 -- common/autotest_common.sh@941 -- # uname 00:09:10.966 16:18:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:10.966 16:18:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62148 00:09:10.966 killing process with pid 62148 00:09:10.966 16:18:30 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:10.966 16:18:30 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:10.966 16:18:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62148' 00:09:10.966 16:18:30 -- common/autotest_common.sh@955 -- # kill 62148 00:09:10.966 16:18:30 -- common/autotest_common.sh@960 -- # wait 62148 00:09:11.900 ************************************ 00:09:11.900 END TEST bdev_nbd 00:09:11.900 ************************************ 00:09:11.900 16:18:31 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:09:11.900 00:09:11.900 real 0m10.310s 00:09:11.900 user 0m14.293s 00:09:11.900 sys 0m3.320s 00:09:11.900 16:18:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:11.900 16:18:31 -- common/autotest_common.sh@10 -- # set +x 00:09:11.900 16:18:31 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:09:11.900 16:18:31 -- bdev/blockdev.sh@762 -- # '[' gpt = nvme ']' 00:09:11.900 16:18:31 -- bdev/blockdev.sh@762 -- # '[' gpt = gpt ']' 00:09:11.900 skipping fio tests on NVMe due to multi-ns failures. 00:09:11.900 16:18:31 -- bdev/blockdev.sh@764 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
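The next stage leaves nbd behind and drives the same seven bdevs through bdevperf. Replaying the verify invocation below by hand would look roughly like this sketch; only the flags whose meaning is unambiguous are annotated, -C is forwarded exactly as the harness passes it, and SPDK= is a shorthand introduced here, not taken from the log:

    SPDK=/home/vagrant/spdk_repo/spdk
    args=(
        --json "$SPDK/test/bdev/bdev.json"  # same bdev definitions as the nbd tests
        -q 128                              # queue depth per job
        -o 4096                             # I/O size in bytes
        -w verify                           # write, read back, and compare
        -t 5                                # run time in seconds
        -m 0x3                              # core mask: reactors on cores 0 and 1
        -C                                  # forwarded as-is from the harness
    )
    "$SPDK/build/examples/bdevperf" "${args[@]}"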
00:09:11.900 16:18:31 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:09:11.900 16:18:31 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:11.900 16:18:31 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:09:11.900 16:18:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:11.900 16:18:31 -- common/autotest_common.sh@10 -- # set +x 00:09:11.900 ************************************ 00:09:11.900 START TEST bdev_verify 00:09:11.900 ************************************ 00:09:11.900 16:18:31 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:09:11.900 [2024-11-09 16:18:31.652447] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:11.900 [2024-11-09 16:18:31.652560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62552 ] 00:09:12.159 [2024-11-09 16:18:31.805178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:12.417 [2024-11-09 16:18:31.999802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.417 [2024-11-09 16:18:31.999872] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.983 Running I/O for 5 seconds... 00:09:18.248 00:09:18.248 Latency(us) 00:09:18.248 [2024-11-09T16:18:38.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.248 [2024-11-09T16:18:38.018Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:18.248 Verification LBA range: start 0x0 length 0x5e800 00:09:18.248 Nvme0n1p1 : 5.04 2632.74 10.28 0.00 0.00 48482.03 7158.55 54848.59 00:09:18.248 [2024-11-09T16:18:38.018Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:18.248 Verification LBA range: start 0x5e800 length 0x5e800 00:09:18.248 Nvme0n1p1 : 5.05 2611.30 10.20 0.00 0.00 48859.03 6604.01 60898.07 00:09:18.248 [2024-11-09T16:18:38.018Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:18.248 Verification LBA range: start 0x0 length 0x5e7ff 00:09:18.248 Nvme0n1p2 : 5.04 2631.98 10.28 0.00 0.00 48459.98 7007.31 51622.20 00:09:18.248 [2024-11-09T16:18:38.018Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:18.248 Verification LBA range: start 0x5e7ff length 0x5e7ff 00:09:18.248 Nvme0n1p2 : 5.05 2618.11 10.23 0.00 0.00 48639.14 4234.63 46984.27 00:09:18.248 [2024-11-09T16:18:38.018Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:18.248 Verification LBA range: start 0x0 length 0xa0000 00:09:18.248 Nvme1n1 : 5.05 2631.29 10.28 0.00 0.00 48424.69 7007.31 49000.76 00:09:18.248 [2024-11-09T16:18:38.018Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:18.248 Verification LBA range: start 0xa0000 length 0xa0000 00:09:18.248 Nvme1n1 : 5.06 2616.38 10.22 0.00 0.00 48606.01 6856.07 45572.73 00:09:18.248 [2024-11-09T16:18:38.018Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:18.248 Verification LBA range: start 0x0 length 0x80000 00:09:18.248 Nvme2n1 : 5.05 
2636.46 10.30 0.00 0.00 48327.47 3806.13 47992.52 00:09:18.248 [2024-11-09T16:18:38.018Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:18.248 Verification LBA range: start 0x80000 length 0x80000 00:09:18.248 Nvme2n1 : 5.06 2614.12 10.21 0.00 0.00 48555.19 10132.87 45169.43 00:09:18.248 [2024-11-09T16:18:38.018Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:18.248 Verification LBA range: start 0x0 length 0x80000 00:09:18.248 Nvme2n2 : 5.05 2635.68 10.30 0.00 0.00 48288.41 4436.28 48194.17 00:09:18.248 [2024-11-09T16:18:38.018Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:18.248 Verification LBA range: start 0x80000 length 0x80000 00:09:18.248 Nvme2n2 : 5.06 2613.50 10.21 0.00 0.00 48523.37 10586.58 45774.38 00:09:18.248 [2024-11-09T16:18:38.018Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:18.248 Verification LBA range: start 0x0 length 0x80000 00:09:18.248 Nvme2n3 : 5.05 2634.96 10.29 0.00 0.00 48232.64 4461.49 47992.52 00:09:18.248 [2024-11-09T16:18:38.018Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:18.248 Verification LBA range: start 0x80000 length 0x80000 00:09:18.248 Nvme2n3 : 5.06 2612.83 10.21 0.00 0.00 48500.50 11241.94 45774.38 00:09:18.248 [2024-11-09T16:18:38.018Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:09:18.248 Verification LBA range: start 0x0 length 0x20000 00:09:18.248 Nvme3n1 : 5.06 2633.14 10.29 0.00 0.00 48211.00 7007.31 48194.17 00:09:18.248 [2024-11-09T16:18:38.018Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:09:18.248 Verification LBA range: start 0x20000 length 0x20000 00:09:18.248 Nvme3n1 : 5.07 2619.36 10.23 0.00 0.00 48391.99 2117.32 46379.32 00:09:18.248 [2024-11-09T16:18:38.019Z] =================================================================================================================== 00:09:18.249 [2024-11-09T16:18:38.019Z] Total : 36741.83 143.52 0.00 0.00 48463.90 2117.32 60898.07 00:09:23.511 00:09:23.511 real 0m10.896s 00:09:23.511 user 0m20.596s 00:09:23.511 sys 0m0.299s 00:09:23.511 16:18:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:23.511 16:18:42 -- common/autotest_common.sh@10 -- # set +x 00:09:23.511 ************************************ 00:09:23.511 END TEST bdev_verify 00:09:23.511 ************************************ 00:09:23.511 16:18:42 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:23.511 16:18:42 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:09:23.511 16:18:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:23.511 16:18:42 -- common/autotest_common.sh@10 -- # set +x 00:09:23.511 ************************************ 00:09:23.511 START TEST bdev_verify_big_io 00:09:23.511 ************************************ 00:09:23.511 16:18:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:09:23.511 [2024-11-09 16:18:42.587967] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
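The verify totals above are internally consistent, which is a quick way to sanity-check a bdevperf table. Throughput: 36741.83 IOPS x 4096 B ≈ 150.5 MB/s, i.e. 36741.83 / 256 = 143.52 MiB/s, matching the MiB/s column. Latency via Little's law: each of the 14 jobs ran at queue depth 128 and roughly 2620 IOPS, so the expected per-job average latency is 128 / 2620 s ≈ 48.9 ms ≈ 48,900 µs, right where the Average column sits.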
00:09:23.511 [2024-11-09 16:18:42.588073] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62674 ] 00:09:23.511 [2024-11-09 16:18:42.740716] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:23.511 [2024-11-09 16:18:42.916900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.511 [2024-11-09 16:18:42.916967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.076 Running I/O for 5 seconds... 00:09:30.634 00:09:30.634 Latency(us) 00:09:30.634 [2024-11-09T16:18:50.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.634 [2024-11-09T16:18:50.404Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.634 Verification LBA range: start 0x0 length 0x5e80 00:09:30.634 Nvme0n1p1 : 5.39 195.31 12.21 0.00 0.00 641388.37 29239.14 1161499.57 00:09:30.634 [2024-11-09T16:18:50.404Z] Job: Nvme0n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.634 Verification LBA range: start 0x5e80 length 0x5e80 00:09:30.634 Nvme0n1p1 : 5.31 295.23 18.45 0.00 0.00 424654.36 69367.34 600108.11 00:09:30.634 [2024-11-09T16:18:50.404Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.634 Verification LBA range: start 0x0 length 0x5e7f 00:09:30.634 Nvme0n1p2 : 5.42 201.09 12.57 0.00 0.00 612162.90 21979.77 1045349.61 00:09:30.634 [2024-11-09T16:18:50.404Z] Job: Nvme0n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.634 Verification LBA range: start 0x5e7f length 0x5e7f 00:09:30.634 Nvme0n1p2 : 5.31 295.14 18.45 0.00 0.00 420358.86 69770.63 551712.30 00:09:30.634 [2024-11-09T16:18:50.404Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.634 Verification LBA range: start 0x0 length 0xa000 00:09:30.634 Nvme1n1 : 5.42 201.03 12.56 0.00 0.00 597889.14 22483.89 948557.98 00:09:30.634 [2024-11-09T16:18:50.404Z] Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.634 Verification LBA range: start 0xa000 length 0xa000 00:09:30.634 Nvme1n1 : 5.36 300.39 18.77 0.00 0.00 409858.07 48194.17 509769.26 00:09:30.634 [2024-11-09T16:18:50.404Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.634 Verification LBA range: start 0x0 length 0x8000 00:09:30.634 Nvme2n1 : 5.44 207.70 12.98 0.00 0.00 565996.04 16736.89 838860.80 00:09:30.634 [2024-11-09T16:18:50.404Z] Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.634 Verification LBA range: start 0x8000 length 0x8000 00:09:30.634 Nvme2n1 : 5.36 300.32 18.77 0.00 0.00 405592.25 48395.82 471052.60 00:09:30.634 [2024-11-09T16:18:50.404Z] Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.634 Verification LBA range: start 0x0 length 0x8000 00:09:30.634 Nvme2n2 : 5.53 251.47 15.72 0.00 0.00 458852.21 12149.37 745295.56 00:09:30.634 [2024-11-09T16:18:50.404Z] Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.634 Verification LBA range: start 0x8000 length 0x8000 00:09:30.634 Nvme2n2 : 5.37 309.15 19.32 0.00 0.00 393183.43 2419.79 429109.56 00:09:30.634 [2024-11-09T16:18:50.404Z] Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.634 Verification LBA range: start 0x0 
length 0x8000 00:09:30.634 Nvme2n3 : 5.62 311.08 19.44 0.00 0.00 364740.21 7561.85 703352.52 00:09:30.634 [2024-11-09T16:18:50.404Z] Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.634 Verification LBA range: start 0x8000 length 0x8000 00:09:30.634 Nvme2n3 : 5.37 317.83 19.86 0.00 0.00 379699.61 2432.39 422656.79 00:09:30.634 [2024-11-09T16:18:50.404Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:09:30.634 Verification LBA range: start 0x0 length 0x2000 00:09:30.634 Nvme3n1 : 5.68 396.68 24.79 0.00 0.00 281973.64 343.43 709805.29 00:09:30.634 [2024-11-09T16:18:50.404Z] Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:09:30.634 Verification LBA range: start 0x2000 length 0x2000 00:09:30.634 Nvme3n1 : 5.37 317.72 19.86 0.00 0.00 375711.51 3251.59 419430.40 00:09:30.634 [2024-11-09T16:18:50.404Z] =================================================================================================================== 00:09:30.634 [2024-11-09T16:18:50.404Z] Total : 3900.14 243.76 0.00 0.00 430704.67 343.43 1161499.57 00:09:31.569 00:09:31.569 real 0m8.461s 00:09:31.569 user 0m15.890s 00:09:31.569 sys 0m0.235s 00:09:31.569 16:18:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:31.569 16:18:50 -- common/autotest_common.sh@10 -- # set +x 00:09:31.569 ************************************ 00:09:31.569 END TEST bdev_verify_big_io 00:09:31.569 ************************************ 00:09:31.569 16:18:51 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:31.569 16:18:51 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:31.569 16:18:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:31.569 16:18:51 -- common/autotest_common.sh@10 -- # set +x 00:09:31.569 ************************************ 00:09:31.569 START TEST bdev_write_zeroes 00:09:31.569 ************************************ 00:09:31.569 16:18:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:31.569 [2024-11-09 16:18:51.093661] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:31.569 [2024-11-09 16:18:51.093879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62783 ] 00:09:31.569 [2024-11-09 16:18:51.239143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.827 [2024-11-09 16:18:51.413256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.393 Running I/O for 1 seconds... 
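While that one-second run executes, note what -w write_zeroes measures: bdevperf issues WRITE ZEROES commands and reports command throughput only; nothing is read back. A hedged way to spot-check the effect with tools already used earlier in this log is to export a bdev over nbd and cmp a region against /dev/zero. Everything below is illustrative: the device name, socket, and byte count are assumptions, and because the workload picks its own offsets this is a probe, not a proof:

    # Illustrative only; names and sizes are assumptions, not from this run.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1p1 /dev/nbd0
    cmp -n 1048576 /dev/nbd0 /dev/zero && echo 'first 1 MiB reads back as zeroes'
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0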
00:09:33.323 00:09:33.324 Latency(us) 00:09:33.324 [2024-11-09T16:18:53.094Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.324 [2024-11-09T16:18:53.094Z] Job: Nvme0n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.324 Nvme0n1p1 : 1.02 9696.84 37.88 0.00 0.00 13162.57 6049.48 25004.50 00:09:33.324 [2024-11-09T16:18:53.094Z] Job: Nvme0n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.324 Nvme0n1p2 : 1.02 9682.08 37.82 0.00 0.00 13159.86 6427.57 24903.68 00:09:33.324 [2024-11-09T16:18:53.094Z] Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.324 Nvme1n1 : 1.02 9670.92 37.78 0.00 0.00 13152.65 10586.58 23794.61 00:09:33.324 [2024-11-09T16:18:53.094Z] Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.324 Nvme2n1 : 1.02 9660.07 37.73 0.00 0.00 13110.70 10737.82 21576.47 00:09:33.324 [2024-11-09T16:18:53.094Z] Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.324 Nvme2n2 : 1.02 9649.17 37.69 0.00 0.00 13091.19 10737.82 21576.47 00:09:33.324 [2024-11-09T16:18:53.094Z] Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.324 Nvme2n3 : 1.02 9638.33 37.65 0.00 0.00 13085.08 10637.00 21374.82 00:09:33.324 [2024-11-09T16:18:53.094Z] Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:09:33.324 Nvme3n1 : 1.02 9686.19 37.84 0.00 0.00 13009.07 8217.21 21273.99 00:09:33.324 [2024-11-09T16:18:53.094Z] =================================================================================================================== 00:09:33.324 [2024-11-09T16:18:53.094Z] Total : 67683.60 264.39 0.00 0.00 13110.07 6049.48 25004.50 00:09:34.260 00:09:34.260 real 0m2.766s 00:09:34.260 user 0m2.470s 00:09:34.260 sys 0m0.183s 00:09:34.260 ************************************ 00:09:34.260 END TEST bdev_write_zeroes 00:09:34.260 ************************************ 00:09:34.260 16:18:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:34.260 16:18:53 -- common/autotest_common.sh@10 -- # set +x 00:09:34.260 16:18:53 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:34.260 16:18:53 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:34.260 16:18:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:34.260 16:18:53 -- common/autotest_common.sh@10 -- # set +x 00:09:34.260 ************************************ 00:09:34.260 START TEST bdev_json_nonenclosed 00:09:34.260 ************************************ 00:09:34.260 16:18:53 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:34.260 [2024-11-09 16:18:53.897988] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:09:34.260 [2024-11-09 16:18:53.898092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62837 ] 00:09:34.518 [2024-11-09 16:18:54.045972] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:34.518 [2024-11-09 16:18:54.215020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.518 [2024-11-09 16:18:54.215154] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:09:34.518 [2024-11-09 16:18:54.215177] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:34.776 00:09:34.776 real 0m0.652s 00:09:34.776 user 0m0.455s 00:09:34.776 sys 0m0.092s 00:09:34.776 ************************************ 00:09:34.776 END TEST bdev_json_nonenclosed 00:09:34.776 ************************************ 00:09:34.776 16:18:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:34.776 16:18:54 -- common/autotest_common.sh@10 -- # set +x 00:09:34.776 16:18:54 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:34.776 16:18:54 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:34.776 16:18:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:34.776 16:18:54 -- common/autotest_common.sh@10 -- # set +x 00:09:34.776 ************************************ 00:09:34.776 START TEST bdev_json_nonarray 00:09:34.776 ************************************ 00:09:34.776 16:18:54 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:09:35.034 [2024-11-09 16:18:54.594413] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:35.034 [2024-11-09 16:18:54.594708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62862 ] 00:09:35.034 [2024-11-09 16:18:54.742580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.291 [2024-11-09 16:18:54.911214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.291 [2024-11-09 16:18:54.911373] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
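That error is the expected outcome: bdev_json_nonenclosed above fed bdevperf a config whose top level is not enclosed in {}, and bdev_json_nonarray here hands it one whose "subsystems" key is not an array; each test passes precisely because the app rejects the config and stops itself, as the warning just below shows. The real fixtures live under test/bdev/ in the repo; illustratively (assumed bodies, not the shipped files), nonenclosed.json might contain a bare "subsystems": [] fragment, and nonarray.json something like { "subsystems": "bdev" }.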
00:09:35.291 [2024-11-09 16:18:54.911396] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:35.550 00:09:35.550 real 0m0.659s 00:09:35.550 user 0m0.454s 00:09:35.550 sys 0m0.101s 00:09:35.550 16:18:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:35.550 16:18:55 -- common/autotest_common.sh@10 -- # set +x 00:09:35.550 ************************************ 00:09:35.550 END TEST bdev_json_nonarray 00:09:35.550 ************************************ 00:09:35.550 16:18:55 -- bdev/blockdev.sh@785 -- # [[ gpt == bdev ]] 00:09:35.550 16:18:55 -- bdev/blockdev.sh@792 -- # [[ gpt == gpt ]] 00:09:35.550 16:18:55 -- bdev/blockdev.sh@793 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:09:35.550 16:18:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:35.550 16:18:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:35.550 16:18:55 -- common/autotest_common.sh@10 -- # set +x 00:09:35.550 ************************************ 00:09:35.550 START TEST bdev_gpt_uuid 00:09:35.550 ************************************ 00:09:35.550 16:18:55 -- common/autotest_common.sh@1114 -- # bdev_gpt_uuid 00:09:35.550 16:18:55 -- bdev/blockdev.sh@612 -- # local bdev 00:09:35.550 16:18:55 -- bdev/blockdev.sh@614 -- # start_spdk_tgt 00:09:35.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:35.550 16:18:55 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=62888 00:09:35.550 16:18:55 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:35.550 16:18:55 -- bdev/blockdev.sh@47 -- # waitforlisten 62888 00:09:35.550 16:18:55 -- common/autotest_common.sh@829 -- # '[' -z 62888 ']' 00:09:35.550 16:18:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:35.550 16:18:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:35.550 16:18:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:35.550 16:18:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:35.550 16:18:55 -- common/autotest_common.sh@10 -- # set +x 00:09:35.550 16:18:55 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:09:35.550 [2024-11-09 16:18:55.309096] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:35.550 [2024-11-09 16:18:55.309212] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62888 ] 00:09:35.808 [2024-11-09 16:18:55.461565] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.067 [2024-11-09 16:18:55.633162] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:36.067 [2024-11-09 16:18:55.633387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.449 16:18:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.449 16:18:56 -- common/autotest_common.sh@862 -- # return 0 00:09:37.449 16:18:56 -- bdev/blockdev.sh@616 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:37.449 16:18:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.449 16:18:56 -- common/autotest_common.sh@10 -- # set +x 00:09:37.449 Some configs were skipped because the RPC state that can call them passed over. 
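With the config loaded, the checks that follow fetch each GPT partition bdev by its unique partition GUID and assert that the bdev's first alias and its driver_specific GUID round-trip to the same value. Reduced to a sketch against a running spdk_tgt, using the repo's rpc.py (the rpc_cmd seen in the trace is a thin wrapper over it):
  guid=6f89f330-603b-4116-ac73-2ca8eae53030
  bdev=$(scripts/rpc.py bdev_get_bdevs -b "$guid")
  [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$guid" ]]
  [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$guid" ]]
The backslash-riddled right-hand sides in the trace below (\6\f\8\9...) are bash escaping every character so that [[ == ]] compares literally instead of treating the value as a glob pattern.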
00:09:37.449 16:18:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.449 16:18:57 -- bdev/blockdev.sh@617 -- # rpc_cmd bdev_wait_for_examine 00:09:37.449 16:18:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.449 16:18:57 -- common/autotest_common.sh@10 -- # set +x 00:09:37.449 16:18:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.449 16:18:57 -- bdev/blockdev.sh@619 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:09:37.449 16:18:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.449 16:18:57 -- common/autotest_common.sh@10 -- # set +x 00:09:37.449 16:18:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.449 16:18:57 -- bdev/blockdev.sh@619 -- # bdev='[ 00:09:37.449 { 00:09:37.449 "name": "Nvme0n1p1", 00:09:37.449 "aliases": [ 00:09:37.449 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:09:37.449 ], 00:09:37.449 "product_name": "GPT Disk", 00:09:37.449 "block_size": 4096, 00:09:37.449 "num_blocks": 774144, 00:09:37.449 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:37.449 "md_size": 64, 00:09:37.449 "md_interleave": false, 00:09:37.449 "dif_type": 0, 00:09:37.449 "assigned_rate_limits": { 00:09:37.449 "rw_ios_per_sec": 0, 00:09:37.449 "rw_mbytes_per_sec": 0, 00:09:37.449 "r_mbytes_per_sec": 0, 00:09:37.449 "w_mbytes_per_sec": 0 00:09:37.449 }, 00:09:37.449 "claimed": false, 00:09:37.449 "zoned": false, 00:09:37.449 "supported_io_types": { 00:09:37.449 "read": true, 00:09:37.449 "write": true, 00:09:37.449 "unmap": true, 00:09:37.449 "write_zeroes": true, 00:09:37.449 "flush": true, 00:09:37.449 "reset": true, 00:09:37.449 "compare": true, 00:09:37.449 "compare_and_write": false, 00:09:37.449 "abort": true, 00:09:37.449 "nvme_admin": false, 00:09:37.449 "nvme_io": false 00:09:37.449 }, 00:09:37.449 "driver_specific": { 00:09:37.449 "gpt": { 00:09:37.449 "base_bdev": "Nvme0n1", 00:09:37.449 "offset_blocks": 256, 00:09:37.449 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:09:37.449 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:09:37.449 "partition_name": "SPDK_TEST_first" 00:09:37.449 } 00:09:37.449 } 00:09:37.449 } 00:09:37.449 ]' 00:09:37.449 16:18:57 -- bdev/blockdev.sh@620 -- # jq -r length 00:09:37.449 16:18:57 -- bdev/blockdev.sh@620 -- # [[ 1 == \1 ]] 00:09:37.449 16:18:57 -- bdev/blockdev.sh@621 -- # jq -r '.[0].aliases[0]' 00:09:37.449 16:18:57 -- bdev/blockdev.sh@621 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:37.449 16:18:57 -- bdev/blockdev.sh@622 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:37.711 16:18:57 -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:09:37.711 16:18:57 -- bdev/blockdev.sh@624 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:09:37.711 16:18:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.711 16:18:57 -- common/autotest_common.sh@10 -- # set +x 00:09:37.711 16:18:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.711 16:18:57 -- bdev/blockdev.sh@624 -- # bdev='[ 00:09:37.711 { 00:09:37.711 "name": "Nvme0n1p2", 00:09:37.711 "aliases": [ 00:09:37.711 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:09:37.711 ], 00:09:37.711 "product_name": "GPT Disk", 00:09:37.711 "block_size": 4096, 00:09:37.711 "num_blocks": 774143, 00:09:37.711 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 
00:09:37.711 "md_size": 64, 00:09:37.711 "md_interleave": false, 00:09:37.711 "dif_type": 0, 00:09:37.711 "assigned_rate_limits": { 00:09:37.711 "rw_ios_per_sec": 0, 00:09:37.711 "rw_mbytes_per_sec": 0, 00:09:37.711 "r_mbytes_per_sec": 0, 00:09:37.711 "w_mbytes_per_sec": 0 00:09:37.711 }, 00:09:37.711 "claimed": false, 00:09:37.711 "zoned": false, 00:09:37.711 "supported_io_types": { 00:09:37.711 "read": true, 00:09:37.711 "write": true, 00:09:37.711 "unmap": true, 00:09:37.711 "write_zeroes": true, 00:09:37.711 "flush": true, 00:09:37.711 "reset": true, 00:09:37.711 "compare": true, 00:09:37.711 "compare_and_write": false, 00:09:37.711 "abort": true, 00:09:37.711 "nvme_admin": false, 00:09:37.711 "nvme_io": false 00:09:37.711 }, 00:09:37.711 "driver_specific": { 00:09:37.711 "gpt": { 00:09:37.711 "base_bdev": "Nvme0n1", 00:09:37.711 "offset_blocks": 774400, 00:09:37.711 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:09:37.711 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:09:37.711 "partition_name": "SPDK_TEST_second" 00:09:37.711 } 00:09:37.711 } 00:09:37.711 } 00:09:37.711 ]' 00:09:37.711 16:18:57 -- bdev/blockdev.sh@625 -- # jq -r length 00:09:37.711 16:18:57 -- bdev/blockdev.sh@625 -- # [[ 1 == \1 ]] 00:09:37.711 16:18:57 -- bdev/blockdev.sh@626 -- # jq -r '.[0].aliases[0]' 00:09:37.711 16:18:57 -- bdev/blockdev.sh@626 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:37.711 16:18:57 -- bdev/blockdev.sh@627 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:09:37.711 16:18:57 -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:09:37.711 16:18:57 -- bdev/blockdev.sh@629 -- # killprocess 62888 00:09:37.711 16:18:57 -- common/autotest_common.sh@936 -- # '[' -z 62888 ']' 00:09:37.711 16:18:57 -- common/autotest_common.sh@940 -- # kill -0 62888 00:09:37.711 16:18:57 -- common/autotest_common.sh@941 -- # uname 00:09:37.711 16:18:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:37.711 16:18:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62888 00:09:37.711 killing process with pid 62888 00:09:37.711 16:18:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:37.711 16:18:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:37.711 16:18:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62888' 00:09:37.711 16:18:57 -- common/autotest_common.sh@955 -- # kill 62888 00:09:37.711 16:18:57 -- common/autotest_common.sh@960 -- # wait 62888 00:09:39.091 ************************************ 00:09:39.091 END TEST bdev_gpt_uuid 00:09:39.091 ************************************ 00:09:39.091 00:09:39.091 real 0m3.509s 00:09:39.091 user 0m3.796s 00:09:39.091 sys 0m0.378s 00:09:39.091 16:18:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:39.091 16:18:58 -- common/autotest_common.sh@10 -- # set +x 00:09:39.091 16:18:58 -- bdev/blockdev.sh@796 -- # [[ gpt == crypto_sw ]] 00:09:39.091 16:18:58 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:39.091 16:18:58 -- bdev/blockdev.sh@809 -- # cleanup 00:09:39.091 16:18:58 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:39.091 16:18:58 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:39.091 16:18:58 -- bdev/blockdev.sh@24 -- # [[ gpt == rbd ]] 
00:09:39.091 16:18:58 -- bdev/blockdev.sh@28 -- # [[ gpt == daos ]] 00:09:39.091 16:18:58 -- bdev/blockdev.sh@32 -- # [[ gpt = \g\p\t ]] 00:09:39.091 16:18:58 -- bdev/blockdev.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:39.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:39.661 Waiting for block devices as requested 00:09:39.661 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.661 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.922 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:09:39.922 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:09:45.211 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:09:45.211 16:19:04 -- bdev/blockdev.sh@34 -- # [[ -b /dev/nvme2n1 ]] 00:09:45.211 16:19:04 -- bdev/blockdev.sh@35 -- # wipefs --all /dev/nvme2n1 00:09:45.211 /dev/nvme2n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:45.211 /dev/nvme2n1: 8 bytes were erased at offset 0x17a179000 (gpt): 45 46 49 20 50 41 52 54 00:09:45.211 /dev/nvme2n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:45.211 /dev/nvme2n1: calling ioctl to re-read partition table: Success 00:09:45.211 16:19:04 -- bdev/blockdev.sh@38 -- # [[ gpt == xnvme ]] 00:09:45.211 00:09:45.211 real 0m59.852s 00:09:45.211 user 1m17.801s 00:09:45.211 sys 0m7.801s 00:09:45.211 16:19:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:45.211 ************************************ 00:09:45.211 END TEST blockdev_nvme_gpt 00:09:45.211 ************************************ 00:09:45.211 16:19:04 -- common/autotest_common.sh@10 -- # set +x 00:09:45.211 16:19:04 -- spdk/autotest.sh@209 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:45.211 16:19:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:45.211 16:19:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:45.211 16:19:04 -- common/autotest_common.sh@10 -- # set +x 00:09:45.211 ************************************ 00:09:45.211 START TEST nvme 00:09:45.211 ************************************ 00:09:45.211 16:19:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:45.472 * Looking for test storage... 
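For the record, the bytes wipefs reported erasing above are the on-disk signatures that made the kernel see GPT: 45 46 49 20 50 41 52 54 is ASCII for "EFI PART" (the GPT header magic, wiped at offset 0x1000, i.e. LBA 1 of this 4096-byte-sector namespace, and again at the backup header near the end of the disk), and 55 aa at offset 0x1fe is the protective-MBR boot signature. A one-liner to confirm the first decoding:
  printf '\x45\x46\x49\x20\x50\x41\x52\x54\n'   # prints: EFI PART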
00:09:45.472 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:45.472 16:19:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:45.472 16:19:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:45.472 16:19:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:45.472 16:19:05 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:45.472 16:19:05 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:45.472 16:19:05 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:45.472 16:19:05 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:45.472 16:19:05 -- scripts/common.sh@335 -- # IFS=.-: 00:09:45.472 16:19:05 -- scripts/common.sh@335 -- # read -ra ver1 00:09:45.472 16:19:05 -- scripts/common.sh@336 -- # IFS=.-: 00:09:45.472 16:19:05 -- scripts/common.sh@336 -- # read -ra ver2 00:09:45.472 16:19:05 -- scripts/common.sh@337 -- # local 'op=<' 00:09:45.472 16:19:05 -- scripts/common.sh@339 -- # ver1_l=2 00:09:45.472 16:19:05 -- scripts/common.sh@340 -- # ver2_l=1 00:09:45.472 16:19:05 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:45.472 16:19:05 -- scripts/common.sh@343 -- # case "$op" in 00:09:45.472 16:19:05 -- scripts/common.sh@344 -- # : 1 00:09:45.472 16:19:05 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:45.472 16:19:05 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:45.472 16:19:05 -- scripts/common.sh@364 -- # decimal 1 00:09:45.472 16:19:05 -- scripts/common.sh@352 -- # local d=1 00:09:45.472 16:19:05 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:45.472 16:19:05 -- scripts/common.sh@354 -- # echo 1 00:09:45.472 16:19:05 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:45.472 16:19:05 -- scripts/common.sh@365 -- # decimal 2 00:09:45.472 16:19:05 -- scripts/common.sh@352 -- # local d=2 00:09:45.472 16:19:05 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:45.473 16:19:05 -- scripts/common.sh@354 -- # echo 2 00:09:45.473 16:19:05 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:45.473 16:19:05 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:45.473 16:19:05 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:45.473 16:19:05 -- scripts/common.sh@367 -- # return 0 00:09:45.473 16:19:05 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:45.473 16:19:05 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:45.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.473 --rc genhtml_branch_coverage=1 00:09:45.473 --rc genhtml_function_coverage=1 00:09:45.473 --rc genhtml_legend=1 00:09:45.473 --rc geninfo_all_blocks=1 00:09:45.473 --rc geninfo_unexecuted_blocks=1 00:09:45.473 00:09:45.473 ' 00:09:45.473 16:19:05 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:45.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.473 --rc genhtml_branch_coverage=1 00:09:45.473 --rc genhtml_function_coverage=1 00:09:45.473 --rc genhtml_legend=1 00:09:45.473 --rc geninfo_all_blocks=1 00:09:45.473 --rc geninfo_unexecuted_blocks=1 00:09:45.473 00:09:45.473 ' 00:09:45.473 16:19:05 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:45.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.473 --rc genhtml_branch_coverage=1 00:09:45.473 --rc genhtml_function_coverage=1 00:09:45.473 --rc genhtml_legend=1 00:09:45.473 --rc geninfo_all_blocks=1 00:09:45.473 --rc geninfo_unexecuted_blocks=1 00:09:45.473 00:09:45.473 ' 00:09:45.473 16:19:05 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:45.473 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:45.473 --rc genhtml_branch_coverage=1 00:09:45.473 --rc genhtml_function_coverage=1 00:09:45.473 --rc genhtml_legend=1 00:09:45.473 --rc geninfo_all_blocks=1 00:09:45.473 --rc geninfo_unexecuted_blocks=1 00:09:45.473 00:09:45.473 ' 00:09:45.473 16:19:05 -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:46.413 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:46.413 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:09:46.413 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:09:46.413 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:09:46.413 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:09:46.413 16:19:06 -- nvme/nvme.sh@79 -- # uname 00:09:46.413 16:19:06 -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:46.413 16:19:06 -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:46.413 16:19:06 -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:46.413 16:19:06 -- common/autotest_common.sh@1068 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:46.413 16:19:06 -- common/autotest_common.sh@1054 -- # _randomize_va_space=2 00:09:46.413 16:19:06 -- common/autotest_common.sh@1055 -- # echo 0 00:09:46.413 Waiting for stub to ready for secondary processes... 00:09:46.413 16:19:06 -- common/autotest_common.sh@1057 -- # stubpid=63564 00:09:46.413 16:19:06 -- common/autotest_common.sh@1056 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:46.413 16:19:06 -- common/autotest_common.sh@1058 -- # echo Waiting for stub to ready for secondary processes... 00:09:46.413 16:19:06 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:46.413 16:19:06 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63564 ]] 00:09:46.413 16:19:06 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:46.413 [2024-11-09 16:19:06.175827] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
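Stepping back to the cmp_versions trace above: the lcov check splits both version strings on '.', '-' and ':' and compares the fields numerically, left to right. A self-contained sketch of the same idea (function name and shape are mine, not the harness's):
  ver_lt() {                              # returns 0 when $1 < $2
    local IFS='.-:' i; local -a a b
    read -ra a <<<"$1"; read -ra b <<<"$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                              # equal is not less-than
  }
  ver_lt "1.15" "2" && echo 'lcov older than 2.x'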
00:09:46.413 [2024-11-09 16:19:06.175930] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.354 [2024-11-09 16:19:06.924154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:47.354 [2024-11-09 16:19:07.119925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.354 [2024-11-09 16:19:07.120212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.354 [2024-11-09 16:19:07.120219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.615 [2024-11-09 16:19:07.140277] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:47.615 16:19:07 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:47.615 16:19:07 -- common/autotest_common.sh@1061 -- # [[ -e /proc/63564 ]] 00:09:47.615 16:19:07 -- common/autotest_common.sh@1062 -- # sleep 1s 00:09:47.615 [2024-11-09 16:19:07.152863] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:47.615 [2024-11-09 16:19:07.153181] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:47.615 [2024-11-09 16:19:07.166171] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:47.615 [2024-11-09 16:19:07.166492] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:47.615 [2024-11-09 16:19:07.166607] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:47.615 [2024-11-09 16:19:07.174278] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:47.615 [2024-11-09 16:19:07.174412] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:47.615 [2024-11-09 16:19:07.174507] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:47.615 [2024-11-09 16:19:07.181723] nvme_cuse.c:1142:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:47.616 [2024-11-09 16:19:07.181848] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:47.616 [2024-11-09 16:19:07.181946] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:47.616 [2024-11-09 16:19:07.182027] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:47.616 [2024-11-09 16:19:07.182137] nvme_cuse.c: 910:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:48.555 16:19:08 -- common/autotest_common.sh@1059 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:48.555 done. 00:09:48.555 16:19:08 -- common/autotest_common.sh@1064 -- # echo done. 
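The wait that just printed 'done.' is a plain readiness poll: the harness launches test/app/stub with a fixed shared-memory id (-i 0) so the tests that follow can attach as secondary processes, then spins until the stub creates /var/run/spdk_stub0, sleeping one second per miss and giving up if the stub PID vanishes. Stripped to its core, the pattern traced above looks like:
  stubpid=63564                            # PID echoed by the harness above
  while [ ! -e /var/run/spdk_stub0 ]; do
    [[ -e /proc/$stubpid ]] || exit 1      # stub died before becoming ready
    sleep 1
  done
  echo done.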
00:09:48.555 16:19:08 -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:48.555 16:19:08 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:09:48.555 16:19:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:48.555 16:19:08 -- common/autotest_common.sh@10 -- # set +x 00:09:48.555 ************************************ 00:09:48.555 START TEST nvme_reset 00:09:48.555 ************************************ 00:09:48.555 16:19:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:48.816 Initializing NVMe Controllers 00:09:48.816 Skipping QEMU NVMe SSD at 0000:00:06.0 00:09:48.816 Skipping QEMU NVMe SSD at 0000:00:07.0 00:09:48.816 Skipping QEMU NVMe SSD at 0000:00:09.0 00:09:48.816 Skipping QEMU NVMe SSD at 0000:00:08.0 00:09:48.816 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:48.816 00:09:48.816 real 0m0.208s 00:09:48.816 user 0m0.059s 00:09:48.816 sys 0m0.104s 00:09:48.816 ************************************ 00:09:48.816 END TEST nvme_reset 00:09:48.816 ************************************ 00:09:48.816 16:19:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:48.816 16:19:08 -- common/autotest_common.sh@10 -- # set +x 00:09:48.816 16:19:08 -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:48.817 16:19:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:48.817 16:19:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:48.817 16:19:08 -- common/autotest_common.sh@10 -- # set +x 00:09:48.817 ************************************ 00:09:48.817 START TEST nvme_identify 00:09:48.817 ************************************ 00:09:48.817 16:19:08 -- common/autotest_common.sh@1114 -- # nvme_identify 00:09:48.817 16:19:08 -- nvme/nvme.sh@12 -- # bdfs=() 00:09:48.817 16:19:08 -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:48.817 16:19:08 -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:48.817 16:19:08 -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:48.817 16:19:08 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:48.817 16:19:08 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:48.817 16:19:08 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:48.817 16:19:08 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:48.817 16:19:08 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:48.817 16:19:08 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:48.817 16:19:08 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:48.817 16:19:08 -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:49.081 [2024-11-09 16:19:08.639805] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:06.0] process 63601 terminated unexpected 00:09:49.081 ===================================================== 00:09:49.081 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:49.081 ===================================================== 00:09:49.081 Controller Capabilities/Features 00:09:49.081 ================================ 00:09:49.081 Vendor ID: 1b36 00:09:49.081 Subsystem Vendor ID: 1af4 00:09:49.081 Serial Number: 12340 00:09:49.081 Model Number: QEMU NVMe Ctrl 00:09:49.081 Firmware Version: 8.0.0 00:09:49.081 Recommended Arb 
Burst: 6 00:09:49.081 IEEE OUI Identifier: 00 54 52 00:09:49.081 Multi-path I/O 00:09:49.081 May have multiple subsystem ports: No 00:09:49.081 May have multiple controllers: No 00:09:49.081 Associated with SR-IOV VF: No 00:09:49.081 Max Data Transfer Size: 524288 00:09:49.081 Max Number of Namespaces: 256 00:09:49.081 Max Number of I/O Queues: 64 00:09:49.081 NVMe Specification Version (VS): 1.4 00:09:49.081 NVMe Specification Version (Identify): 1.4 00:09:49.081 Maximum Queue Entries: 2048 00:09:49.081 Contiguous Queues Required: Yes 00:09:49.081 Arbitration Mechanisms Supported 00:09:49.081 Weighted Round Robin: Not Supported 00:09:49.081 Vendor Specific: Not Supported 00:09:49.081 Reset Timeout: 7500 ms 00:09:49.081 Doorbell Stride: 4 bytes 00:09:49.081 NVM Subsystem Reset: Not Supported 00:09:49.081 Command Sets Supported 00:09:49.081 NVM Command Set: Supported 00:09:49.081 Boot Partition: Not Supported 00:09:49.081 Memory Page Size Minimum: 4096 bytes 00:09:49.081 Memory Page Size Maximum: 65536 bytes 00:09:49.081 Persistent Memory Region: Not Supported 00:09:49.081 Optional Asynchronous Events Supported 00:09:49.081 Namespace Attribute Notices: Supported 00:09:49.081 Firmware Activation Notices: Not Supported 00:09:49.081 ANA Change Notices: Not Supported 00:09:49.081 PLE Aggregate Log Change Notices: Not Supported 00:09:49.081 LBA Status Info Alert Notices: Not Supported 00:09:49.081 EGE Aggregate Log Change Notices: Not Supported 00:09:49.081 Normal NVM Subsystem Shutdown event: Not Supported 00:09:49.081 Zone Descriptor Change Notices: Not Supported 00:09:49.081 Discovery Log Change Notices: Not Supported 00:09:49.081 Controller Attributes 00:09:49.081 128-bit Host Identifier: Not Supported 00:09:49.081 Non-Operational Permissive Mode: Not Supported 00:09:49.081 NVM Sets: Not Supported 00:09:49.081 Read Recovery Levels: Not Supported 00:09:49.081 Endurance Groups: Not Supported 00:09:49.081 Predictable Latency Mode: Not Supported 00:09:49.081 Traffic Based Keep ALive: Not Supported 00:09:49.081 Namespace Granularity: Not Supported 00:09:49.081 SQ Associations: Not Supported 00:09:49.081 UUID List: Not Supported 00:09:49.081 Multi-Domain Subsystem: Not Supported 00:09:49.081 Fixed Capacity Management: Not Supported 00:09:49.081 Variable Capacity Management: Not Supported 00:09:49.081 Delete Endurance Group: Not Supported 00:09:49.081 Delete NVM Set: Not Supported 00:09:49.081 Extended LBA Formats Supported: Supported 00:09:49.081 Flexible Data Placement Supported: Not Supported 00:09:49.081 00:09:49.081 Controller Memory Buffer Support 00:09:49.081 ================================ 00:09:49.081 Supported: No 00:09:49.081 00:09:49.081 Persistent Memory Region Support 00:09:49.081 ================================ 00:09:49.081 Supported: No 00:09:49.081 00:09:49.081 Admin Command Set Attributes 00:09:49.081 ============================ 00:09:49.081 Security Send/Receive: Not Supported 00:09:49.081 Format NVM: Supported 00:09:49.081 Firmware Activate/Download: Not Supported 00:09:49.081 Namespace Management: Supported 00:09:49.081 Device Self-Test: Not Supported 00:09:49.081 Directives: Supported 00:09:49.081 NVMe-MI: Not Supported 00:09:49.081 Virtualization Management: Not Supported 00:09:49.081 Doorbell Buffer Config: Supported 00:09:49.081 Get LBA Status Capability: Not Supported 00:09:49.081 Command & Feature Lockdown Capability: Not Supported 00:09:49.081 Abort Command Limit: 4 00:09:49.081 Async Event Request Limit: 4 00:09:49.081 Number of Firmware Slots: N/A 00:09:49.081 
Firmware Slot 1 Read-Only: N/A 00:09:49.081 Firmware Activation Without Reset: N/A 00:09:49.081 Multiple Update Detection Support: N/A 00:09:49.081 Firmware Update Granularity: No Information Provided 00:09:49.081 Per-Namespace SMART Log: Yes 00:09:49.081 Asymmetric Namespace Access Log Page: Not Supported 00:09:49.081 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:49.081 Command Effects Log Page: Supported 00:09:49.081 Get Log Page Extended Data: Supported 00:09:49.081 Telemetry Log Pages: Not Supported 00:09:49.081 Persistent Event Log Pages: Not Supported 00:09:49.081 Supported Log Pages Log Page: May Support 00:09:49.081 Commands Supported & Effects Log Page: Not Supported 00:09:49.081 Feature Identifiers & Effects Log Page:May Support 00:09:49.081 NVMe-MI Commands & Effects Log Page: May Support 00:09:49.081 Data Area 4 for Telemetry Log: Not Supported 00:09:49.081 Error Log Page Entries Supported: 1 00:09:49.081 Keep Alive: Not Supported 00:09:49.081 00:09:49.081 NVM Command Set Attributes 00:09:49.081 ========================== 00:09:49.081 Submission Queue Entry Size 00:09:49.081 Max: 64 00:09:49.081 Min: 64 00:09:49.081 Completion Queue Entry Size 00:09:49.082 Max: 16 00:09:49.082 Min: 16 00:09:49.082 Number of Namespaces: 256 00:09:49.082 Compare Command: Supported 00:09:49.082 Write Uncorrectable Command: Not Supported 00:09:49.082 Dataset Management Command: Supported 00:09:49.082 Write Zeroes Command: Supported 00:09:49.082 Set Features Save Field: Supported 00:09:49.082 Reservations: Not Supported 00:09:49.082 Timestamp: Supported 00:09:49.082 Copy: Supported 00:09:49.082 Volatile Write Cache: Present 00:09:49.082 Atomic Write Unit (Normal): 1 00:09:49.082 Atomic Write Unit (PFail): 1 00:09:49.082 Atomic Compare & Write Unit: 1 00:09:49.082 Fused Compare & Write: Not Supported 00:09:49.082 Scatter-Gather List 00:09:49.082 SGL Command Set: Supported 00:09:49.082 SGL Keyed: Not Supported 00:09:49.082 SGL Bit Bucket Descriptor: Not Supported 00:09:49.082 SGL Metadata Pointer: Not Supported 00:09:49.082 Oversized SGL: Not Supported 00:09:49.082 SGL Metadata Address: Not Supported 00:09:49.082 SGL Offset: Not Supported 00:09:49.082 Transport SGL Data Block: Not Supported 00:09:49.082 Replay Protected Memory Block: Not Supported 00:09:49.082 00:09:49.082 Firmware Slot Information 00:09:49.082 ========================= 00:09:49.082 Active slot: 1 00:09:49.082 Slot 1 Firmware Revision: 1.0 00:09:49.082 00:09:49.082 00:09:49.082 Commands Supported and Effects 00:09:49.082 ============================== 00:09:49.082 Admin Commands 00:09:49.082 -------------- 00:09:49.082 Delete I/O Submission Queue (00h): Supported 00:09:49.082 Create I/O Submission Queue (01h): Supported 00:09:49.082 Get Log Page (02h): Supported 00:09:49.082 Delete I/O Completion Queue (04h): Supported 00:09:49.082 Create I/O Completion Queue (05h): Supported 00:09:49.082 Identify (06h): Supported 00:09:49.082 Abort (08h): Supported 00:09:49.082 Set Features (09h): Supported 00:09:49.082 Get Features (0Ah): Supported 00:09:49.082 Asynchronous Event Request (0Ch): Supported 00:09:49.082 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:49.082 Directive Send (19h): Supported 00:09:49.082 Directive Receive (1Ah): Supported 00:09:49.082 Virtualization Management (1Ch): Supported 00:09:49.082 Doorbell Buffer Config (7Ch): Supported 00:09:49.082 Format NVM (80h): Supported LBA-Change 00:09:49.082 I/O Commands 00:09:49.082 ------------ 00:09:49.082 Flush (00h): Supported LBA-Change 00:09:49.082 Write (01h): 
Supported LBA-Change 00:09:49.082 Read (02h): Supported 00:09:49.082 Compare (05h): Supported 00:09:49.082 Write Zeroes (08h): Supported LBA-Change 00:09:49.082 Dataset Management (09h): Supported LBA-Change 00:09:49.082 Unknown (0Ch): Supported 00:09:49.082 Unknown (12h): Supported 00:09:49.082 Copy (19h): Supported LBA-Change 00:09:49.082 Unknown (1Dh): Supported LBA-Change 00:09:49.082 00:09:49.082 Error Log 00:09:49.082 ========= 00:09:49.082 00:09:49.082 Arbitration 00:09:49.082 =========== 00:09:49.082 Arbitration Burst: no limit 00:09:49.082 00:09:49.082 Power Management 00:09:49.082 ================ 00:09:49.082 Number of Power States: 1 00:09:49.082 Current Power State: Power State #0 00:09:49.082 Power State #0: 00:09:49.082 Max Power: 25.00 W 00:09:49.082 Non-Operational State: Operational 00:09:49.082 Entry Latency: 16 microseconds 00:09:49.082 Exit Latency: 4 microseconds 00:09:49.082 Relative Read Throughput: 0 00:09:49.082 Relative Read Latency: 0 00:09:49.082 Relative Write Throughput: 0 00:09:49.082 Relative Write Latency: 0 00:09:49.082 Idle Power: Not Reported 00:09:49.082 [2024-11-09 16:19:08.642148] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:07.0] process 63601 terminated unexpected 00:09:49.082 Active Power: Not Reported 00:09:49.082 Non-Operational Permissive Mode: Not Supported 00:09:49.082 00:09:49.082 Health Information 00:09:49.082 ================== 00:09:49.082 Critical Warnings: 00:09:49.082 Available Spare Space: OK 00:09:49.082 Temperature: OK 00:09:49.082 Device Reliability: OK 00:09:49.082 Read Only: No 00:09:49.082 Volatile Memory Backup: OK 00:09:49.082 Current Temperature: 323 Kelvin (50 Celsius) 00:09:49.082 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:49.082 Available Spare: 0% 00:09:49.082 Available Spare Threshold: 0% 00:09:49.082 Life Percentage Used: 0% 00:09:49.082 Data Units Read: 1791 00:09:49.082 Data Units Written: 824 00:09:49.082 Host Read Commands: 86471 00:09:49.082 Host Write Commands: 42934 00:09:49.082 Controller Busy Time: 0 minutes 00:09:49.082 Power Cycles: 0 00:09:49.082 Power On Hours: 0 hours 00:09:49.082 Unsafe Shutdowns: 0 00:09:49.082 Unrecoverable Media Errors: 0 00:09:49.082 Lifetime Error Log Entries: 0 00:09:49.082 Warning Temperature Time: 0 minutes 00:09:49.082 Critical Temperature Time: 0 minutes 00:09:49.082 00:09:49.082 Number of Queues 00:09:49.082 ================ 00:09:49.082 Number of I/O Submission Queues: 64 00:09:49.082 Number of I/O Completion Queues: 64 00:09:49.082 00:09:49.082 ZNS Specific Controller Data 00:09:49.082 ============================ 00:09:49.082 Zone Append Size Limit: 0 00:09:49.082 00:09:49.082 00:09:49.082 Active Namespaces 00:09:49.082 ================= 00:09:49.082 Namespace ID:1 00:09:49.082 Error Recovery Timeout: Unlimited 00:09:49.082 Command Set Identifier: NVM (00h) 00:09:49.082 Deallocate: Supported 00:09:49.082 Deallocated/Unwritten Error: Supported 00:09:49.082 Deallocated Read Value: All 0x00 00:09:49.082 Deallocate in Write Zeroes: Not Supported 00:09:49.082 Deallocated Guard Field: 0xFFFF 00:09:49.082 Flush: Supported 00:09:49.082 Reservation: Not Supported 00:09:49.082 Metadata Transferred as: Separate Metadata Buffer 00:09:49.082 Namespace Sharing Capabilities: Private 00:09:49.082 Size (in LBAs): 1548666 (5GiB) 00:09:49.082 Capacity (in LBAs): 1548666 (5GiB) 00:09:49.082 Utilization (in LBAs): 1548666 (5GiB) 00:09:49.082 Thin Provisioning: Not Supported 00:09:49.082 Per-NS Atomic Units: No 00:09:49.082 Maximum Single Source Range Length:
128 00:09:49.082 Maximum Copy Length: 128 00:09:49.082 Maximum Source Range Count: 128 00:09:49.082 NGUID/EUI64 Never Reused: No 00:09:49.082 Namespace Write Protected: No 00:09:49.082 Number of LBA Formats: 8 00:09:49.082 Current LBA Format: LBA Format #07 00:09:49.082 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:49.082 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:49.082 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:49.082 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:49.082 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:49.082 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:49.082 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:49.082 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:49.082 00:09:49.082 ===================================================== 00:09:49.082 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:49.082 ===================================================== 00:09:49.082 Controller Capabilities/Features 00:09:49.082 ================================ 00:09:49.082 Vendor ID: 1b36 00:09:49.082 Subsystem Vendor ID: 1af4 00:09:49.082 Serial Number: 12341 00:09:49.082 Model Number: QEMU NVMe Ctrl 00:09:49.082 Firmware Version: 8.0.0 00:09:49.082 Recommended Arb Burst: 6 00:09:49.082 IEEE OUI Identifier: 00 54 52 00:09:49.082 Multi-path I/O 00:09:49.082 May have multiple subsystem ports: No 00:09:49.082 May have multiple controllers: No 00:09:49.082 Associated with SR-IOV VF: No 00:09:49.082 Max Data Transfer Size: 524288 00:09:49.082 Max Number of Namespaces: 256 00:09:49.082 Max Number of I/O Queues: 64 00:09:49.082 NVMe Specification Version (VS): 1.4 00:09:49.082 NVMe Specification Version (Identify): 1.4 00:09:49.082 Maximum Queue Entries: 2048 00:09:49.082 Contiguous Queues Required: Yes 00:09:49.082 Arbitration Mechanisms Supported 00:09:49.082 Weighted Round Robin: Not Supported 00:09:49.082 Vendor Specific: Not Supported 00:09:49.082 Reset Timeout: 7500 ms 00:09:49.082 Doorbell Stride: 4 bytes 00:09:49.082 NVM Subsystem Reset: Not Supported 00:09:49.082 Command Sets Supported 00:09:49.082 NVM Command Set: Supported 00:09:49.082 Boot Partition: Not Supported 00:09:49.082 Memory Page Size Minimum: 4096 bytes 00:09:49.082 Memory Page Size Maximum: 65536 bytes 00:09:49.082 Persistent Memory Region: Not Supported 00:09:49.082 Optional Asynchronous Events Supported 00:09:49.082 Namespace Attribute Notices: Supported 00:09:49.082 Firmware Activation Notices: Not Supported 00:09:49.082 ANA Change Notices: Not Supported 00:09:49.082 PLE Aggregate Log Change Notices: Not Supported 00:09:49.082 LBA Status Info Alert Notices: Not Supported 00:09:49.083 EGE Aggregate Log Change Notices: Not Supported 00:09:49.083 Normal NVM Subsystem Shutdown event: Not Supported 00:09:49.083 Zone Descriptor Change Notices: Not Supported 00:09:49.083 Discovery Log Change Notices: Not Supported 00:09:49.083 Controller Attributes 00:09:49.083 128-bit Host Identifier: Not Supported 00:09:49.083 Non-Operational Permissive Mode: Not Supported 00:09:49.083 NVM Sets: Not Supported 00:09:49.083 Read Recovery Levels: Not Supported 00:09:49.083 Endurance Groups: Not Supported 00:09:49.083 Predictable Latency Mode: Not Supported 00:09:49.083 Traffic Based Keep ALive: Not Supported 00:09:49.083 Namespace Granularity: Not Supported 00:09:49.083 SQ Associations: Not Supported 00:09:49.083 UUID List: Not Supported 00:09:49.083 Multi-Domain Subsystem: Not Supported 00:09:49.083 Fixed Capacity Management: Not Supported 00:09:49.083 Variable Capacity 
Management: Not Supported 00:09:49.083 Delete Endurance Group: Not Supported 00:09:49.083 Delete NVM Set: Not Supported 00:09:49.083 Extended LBA Formats Supported: Supported 00:09:49.083 Flexible Data Placement Supported: Not Supported 00:09:49.083 00:09:49.083 Controller Memory Buffer Support 00:09:49.083 ================================ 00:09:49.083 Supported: No 00:09:49.083 00:09:49.083 Persistent Memory Region Support 00:09:49.083 ================================ 00:09:49.083 Supported: No 00:09:49.083 00:09:49.083 Admin Command Set Attributes 00:09:49.083 ============================ 00:09:49.083 Security Send/Receive: Not Supported 00:09:49.083 Format NVM: Supported 00:09:49.083 Firmware Activate/Download: Not Supported 00:09:49.083 Namespace Management: Supported 00:09:49.083 Device Self-Test: Not Supported 00:09:49.083 Directives: Supported 00:09:49.083 NVMe-MI: Not Supported 00:09:49.083 Virtualization Management: Not Supported 00:09:49.083 Doorbell Buffer Config: Supported 00:09:49.083 Get LBA Status Capability: Not Supported 00:09:49.083 Command & Feature Lockdown Capability: Not Supported 00:09:49.083 Abort Command Limit: 4 00:09:49.083 Async Event Request Limit: 4 00:09:49.083 Number of Firmware Slots: N/A 00:09:49.083 Firmware Slot 1 Read-Only: N/A 00:09:49.083 Firmware Activation Without Reset: N/A 00:09:49.083 Multiple Update Detection Support: N/A 00:09:49.083 Firmware Update Granularity: No Information Provided 00:09:49.083 Per-Namespace SMART Log: Yes 00:09:49.083 Asymmetric Namespace Access Log Page: Not Supported 00:09:49.083 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:49.083 Command Effects Log Page: Supported 00:09:49.083 Get Log Page Extended Data: Supported 00:09:49.083 Telemetry Log Pages: Not Supported 00:09:49.083 Persistent Event Log Pages: Not Supported 00:09:49.083 Supported Log Pages Log Page: May Support 00:09:49.083 Commands Supported & Effects Log Page: Not Supported 00:09:49.083 Feature Identifiers & Effects Log Page:May Support 00:09:49.083 NVMe-MI Commands & Effects Log Page: May Support 00:09:49.083 Data Area 4 for Telemetry Log: Not Supported 00:09:49.083 Error Log Page Entries Supported: 1 00:09:49.083 Keep Alive: Not Supported 00:09:49.083 00:09:49.083 NVM Command Set Attributes 00:09:49.083 ========================== 00:09:49.083 Submission Queue Entry Size 00:09:49.083 Max: 64 00:09:49.083 Min: 64 00:09:49.083 Completion Queue Entry Size 00:09:49.083 Max: 16 00:09:49.083 Min: 16 00:09:49.083 Number of Namespaces: 256 00:09:49.083 Compare Command: Supported 00:09:49.083 Write Uncorrectable Command: Not Supported 00:09:49.083 Dataset Management Command: Supported 00:09:49.083 Write Zeroes Command: Supported 00:09:49.083 Set Features Save Field: Supported 00:09:49.083 Reservations: Not Supported 00:09:49.083 Timestamp: Supported 00:09:49.083 Copy: Supported 00:09:49.083 Volatile Write Cache: Present 00:09:49.083 Atomic Write Unit (Normal): 1 00:09:49.083 Atomic Write Unit (PFail): 1 00:09:49.083 Atomic Compare & Write Unit: 1 00:09:49.083 Fused Compare & Write: Not Supported 00:09:49.083 Scatter-Gather List 00:09:49.083 SGL Command Set: Supported 00:09:49.083 SGL Keyed: Not Supported 00:09:49.083 SGL Bit Bucket Descriptor: Not Supported 00:09:49.083 SGL Metadata Pointer: Not Supported 00:09:49.083 Oversized SGL: Not Supported 00:09:49.083 SGL Metadata Address: Not Supported 00:09:49.083 SGL Offset: Not Supported 00:09:49.083 Transport SGL Data Block: Not Supported 00:09:49.083 Replay Protected Memory Block: Not Supported 00:09:49.083 
00:09:49.083 Firmware Slot Information 00:09:49.083 ========================= 00:09:49.083 Active slot: 1 00:09:49.083 Slot 1 Firmware Revision: 1.0 00:09:49.083 00:09:49.083 00:09:49.083 Commands Supported and Effects 00:09:49.083 ============================== 00:09:49.083 Admin Commands 00:09:49.083 -------------- 00:09:49.083 Delete I/O Submission Queue (00h): Supported 00:09:49.083 Create I/O Submission Queue (01h): Supported 00:09:49.083 Get Log Page (02h): Supported 00:09:49.083 Delete I/O Completion Queue (04h): Supported 00:09:49.083 Create I/O Completion Queue (05h): Supported 00:09:49.083 Identify (06h): Supported 00:09:49.083 Abort (08h): Supported 00:09:49.083 Set Features (09h): Supported 00:09:49.083 Get Features (0Ah): Supported 00:09:49.083 Asynchronous Event Request (0Ch): Supported 00:09:49.083 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:49.083 Directive Send (19h): Supported 00:09:49.083 Directive Receive (1Ah): Supported 00:09:49.083 Virtualization Management (1Ch): Supported 00:09:49.083 Doorbell Buffer Config (7Ch): Supported 00:09:49.083 Format NVM (80h): Supported LBA-Change 00:09:49.083 I/O Commands 00:09:49.083 ------------ 00:09:49.083 Flush (00h): Supported LBA-Change 00:09:49.083 Write (01h): Supported LBA-Change 00:09:49.083 Read (02h): Supported 00:09:49.083 Compare (05h): Supported 00:09:49.083 Write Zeroes (08h): Supported LBA-Change 00:09:49.083 Dataset Management (09h): Supported LBA-Change 00:09:49.083 Unknown (0Ch): Supported 00:09:49.083 Unknown (12h): Supported 00:09:49.083 Copy (19h): Supported LBA-Change 00:09:49.083 Unknown (1Dh): Supported LBA-Change 00:09:49.083 00:09:49.083 Error Log 00:09:49.083 ========= 00:09:49.083 00:09:49.083 Arbitration 00:09:49.083 =========== 00:09:49.083 Arbitration Burst: no limit 00:09:49.083 00:09:49.083 Power Management 00:09:49.083 ================ 00:09:49.083 Number of Power States: 1 00:09:49.083 Current Power State: Power State #0 00:09:49.083 Power State #0: 00:09:49.083 Max Power: 25.00 W 00:09:49.083 Non-Operational State: Operational 00:09:49.083 Entry Latency: 16 microseconds 00:09:49.083 Exit Latency: 4 microseconds 00:09:49.083 Relative Read Throughput: 0 00:09:49.083 Relative Read Latency: 0 00:09:49.083 Relative Write Throughput: 0 00:09:49.083 Relative Write Latency: 0 00:09:49.083 Idle Power: Not Reported 00:09:49.083 Active Power: Not Reported 00:09:49.083 Non-Operational Permissive Mode: Not Supported 00:09:49.083 00:09:49.083 Health Information 00:09:49.083 ================== 00:09:49.083 Critical Warnings: 00:09:49.083 Available Spare Space: OK 00:09:49.083 Temperature: OK 00:09:49.083 Device Reliability: OK 00:09:49.083 Read Only: No 00:09:49.083 Volatile Memory Backup: OK 00:09:49.083 Current Temperature: 323 Kelvin (50 Celsius) 00:09:49.083 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:49.083 Available Spare: 0% 00:09:49.083 Available Spare Threshold: 0% 00:09:49.083 Life Percentage Used: 0% 00:09:49.083 Data Units Read: 1196 00:09:49.083 Data Units Written: 551 00:09:49.083 Host Read Commands: 56924 00:09:49.083 Host Write Commands: 27921 00:09:49.083 Controller Busy Time: 0 minutes 00:09:49.083 Power Cycles: 0 00:09:49.083 Power On Hours: 0 hours 00:09:49.083 Unsafe Shutdowns: 0 00:09:49.083 Unrecoverable Media Errors: 0 00:09:49.083 Lifetime Error Log Entries: 0 00:09:49.083 Warning Temperature Time: 0 minutes 00:09:49.083 Critical Temperature Time: 0 minutes 00:09:49.083 00:09:49.083 Number of Queues 00:09:49.083 ================ 00:09:49.083 Number of I/O 
Submission Queues: 64 00:09:49.083 Number of I/O Completion Queues: 64 00:09:49.083 00:09:49.083 ZNS Specific Controller Data 00:09:49.083 ============================ 00:09:49.083 Zone Append Size Limit: 0 00:09:49.083 00:09:49.083 00:09:49.083 Active Namespaces 00:09:49.083 ================= 00:09:49.083 Namespace ID:1 00:09:49.083 Error Recovery Timeout: Unlimited 00:09:49.083 Command Set Identifier: NVM (00h) 00:09:49.083 [2024-11-09 16:19:08.643404] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:09.0] process 63601 terminated unexpected 00:09:49.083 Deallocate: Supported 00:09:49.084 Deallocated/Unwritten Error: Supported 00:09:49.084 Deallocated Read Value: All 0x00 00:09:49.084 Deallocate in Write Zeroes: Not Supported 00:09:49.084 Deallocated Guard Field: 0xFFFF 00:09:49.084 Flush: Supported 00:09:49.084 Reservation: Not Supported 00:09:49.084 Namespace Sharing Capabilities: Private 00:09:49.084 Size (in LBAs): 1310720 (5GiB) 00:09:49.084 Capacity (in LBAs): 1310720 (5GiB) 00:09:49.084 Utilization (in LBAs): 1310720 (5GiB) 00:09:49.084 Thin Provisioning: Not Supported 00:09:49.084 Per-NS Atomic Units: No 00:09:49.084 Maximum Single Source Range Length: 128 00:09:49.084 Maximum Copy Length: 128 00:09:49.084 Maximum Source Range Count: 128 00:09:49.084 NGUID/EUI64 Never Reused: No 00:09:49.084 Namespace Write Protected: No 00:09:49.084 Number of LBA Formats: 8 00:09:49.084 Current LBA Format: LBA Format #04 00:09:49.084 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:49.084 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:49.084 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:49.084 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:49.084 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:49.084 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:49.084 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:49.084 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:49.084 00:09:49.084 ===================================================== 00:09:49.084 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:49.084 ===================================================== 00:09:49.084 Controller Capabilities/Features 00:09:49.084 ================================ 00:09:49.084 Vendor ID: 1b36 00:09:49.084 Subsystem Vendor ID: 1af4 00:09:49.084 Serial Number: 12343 00:09:49.084 Model Number: QEMU NVMe Ctrl 00:09:49.084 Firmware Version: 8.0.0 00:09:49.084 Recommended Arb Burst: 6 00:09:49.084 IEEE OUI Identifier: 00 54 52 00:09:49.084 Multi-path I/O 00:09:49.084 May have multiple subsystem ports: No 00:09:49.084 May have multiple controllers: Yes 00:09:49.084 Associated with SR-IOV VF: No 00:09:49.084 Max Data Transfer Size: 524288 00:09:49.084 Max Number of Namespaces: 256 00:09:49.084 Max Number of I/O Queues: 64 00:09:49.084 NVMe Specification Version (VS): 1.4 00:09:49.084 NVMe Specification Version (Identify): 1.4 00:09:49.084 Maximum Queue Entries: 2048 00:09:49.084 Contiguous Queues Required: Yes 00:09:49.084 Arbitration Mechanisms Supported 00:09:49.084 Weighted Round Robin: Not Supported 00:09:49.084 Vendor Specific: Not Supported 00:09:49.084 Reset Timeout: 7500 ms 00:09:49.084 Doorbell Stride: 4 bytes 00:09:49.084 NVM Subsystem Reset: Not Supported 00:09:49.084 Command Sets Supported 00:09:49.084 NVM Command Set: Supported 00:09:49.084 Boot Partition: Not Supported 00:09:49.084 Memory Page Size Minimum: 4096 bytes 00:09:49.084 Memory Page Size Maximum: 65536 bytes 00:09:49.084 Persistent Memory Region: Not Supported 00:09:49.084
Optional Asynchronous Events Supported 00:09:49.084 Namespace Attribute Notices: Supported 00:09:49.084 Firmware Activation Notices: Not Supported 00:09:49.084 ANA Change Notices: Not Supported 00:09:49.084 PLE Aggregate Log Change Notices: Not Supported 00:09:49.084 LBA Status Info Alert Notices: Not Supported 00:09:49.084 EGE Aggregate Log Change Notices: Not Supported 00:09:49.084 Normal NVM Subsystem Shutdown event: Not Supported 00:09:49.084 Zone Descriptor Change Notices: Not Supported 00:09:49.084 Discovery Log Change Notices: Not Supported 00:09:49.084 Controller Attributes 00:09:49.084 128-bit Host Identifier: Not Supported 00:09:49.084 Non-Operational Permissive Mode: Not Supported 00:09:49.084 NVM Sets: Not Supported 00:09:49.084 Read Recovery Levels: Not Supported 00:09:49.084 Endurance Groups: Supported 00:09:49.084 Predictable Latency Mode: Not Supported 00:09:49.084 Traffic Based Keep ALive: Not Supported 00:09:49.084 Namespace Granularity: Not Supported 00:09:49.084 SQ Associations: Not Supported 00:09:49.084 UUID List: Not Supported 00:09:49.084 Multi-Domain Subsystem: Not Supported 00:09:49.084 Fixed Capacity Management: Not Supported 00:09:49.084 Variable Capacity Management: Not Supported 00:09:49.084 Delete Endurance Group: Not Supported 00:09:49.084 Delete NVM Set: Not Supported 00:09:49.084 Extended LBA Formats Supported: Supported 00:09:49.084 Flexible Data Placement Supported: Supported 00:09:49.084 00:09:49.084 Controller Memory Buffer Support 00:09:49.084 ================================ 00:09:49.084 Supported: No 00:09:49.084 00:09:49.084 Persistent Memory Region Support 00:09:49.084 ================================ 00:09:49.084 Supported: No 00:09:49.084 00:09:49.084 Admin Command Set Attributes 00:09:49.084 ============================ 00:09:49.084 Security Send/Receive: Not Supported 00:09:49.084 Format NVM: Supported 00:09:49.084 Firmware Activate/Download: Not Supported 00:09:49.084 Namespace Management: Supported 00:09:49.084 Device Self-Test: Not Supported 00:09:49.084 Directives: Supported 00:09:49.084 NVMe-MI: Not Supported 00:09:49.084 Virtualization Management: Not Supported 00:09:49.084 Doorbell Buffer Config: Supported 00:09:49.084 Get LBA Status Capability: Not Supported 00:09:49.084 Command & Feature Lockdown Capability: Not Supported 00:09:49.084 Abort Command Limit: 4 00:09:49.084 Async Event Request Limit: 4 00:09:49.084 Number of Firmware Slots: N/A 00:09:49.084 Firmware Slot 1 Read-Only: N/A 00:09:49.084 Firmware Activation Without Reset: N/A 00:09:49.084 Multiple Update Detection Support: N/A 00:09:49.084 Firmware Update Granularity: No Information Provided 00:09:49.084 Per-Namespace SMART Log: Yes 00:09:49.084 Asymmetric Namespace Access Log Page: Not Supported 00:09:49.084 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:49.084 Command Effects Log Page: Supported 00:09:49.084 Get Log Page Extended Data: Supported 00:09:49.084 Telemetry Log Pages: Not Supported 00:09:49.084 Persistent Event Log Pages: Not Supported 00:09:49.084 Supported Log Pages Log Page: May Support 00:09:49.084 Commands Supported & Effects Log Page: Not Supported 00:09:49.084 Feature Identifiers & Effects Log Page:May Support 00:09:49.084 NVMe-MI Commands & Effects Log Page: May Support 00:09:49.084 Data Area 4 for Telemetry Log: Not Supported 00:09:49.084 Error Log Page Entries Supported: 1 00:09:49.084 Keep Alive: Not Supported 00:09:49.084 00:09:49.084 NVM Command Set Attributes 00:09:49.084 ========================== 00:09:49.084 Submission Queue Entry Size 
00:09:49.084 Max: 64 00:09:49.084 Min: 64 00:09:49.084 Completion Queue Entry Size 00:09:49.084 Max: 16 00:09:49.084 Min: 16 00:09:49.084 Number of Namespaces: 256 00:09:49.084 Compare Command: Supported 00:09:49.084 Write Uncorrectable Command: Not Supported 00:09:49.084 Dataset Management Command: Supported 00:09:49.084 Write Zeroes Command: Supported 00:09:49.084 Set Features Save Field: Supported 00:09:49.084 Reservations: Not Supported 00:09:49.084 Timestamp: Supported 00:09:49.084 Copy: Supported 00:09:49.084 Volatile Write Cache: Present 00:09:49.084 Atomic Write Unit (Normal): 1 00:09:49.084 Atomic Write Unit (PFail): 1 00:09:49.084 Atomic Compare & Write Unit: 1 00:09:49.084 Fused Compare & Write: Not Supported 00:09:49.084 Scatter-Gather List 00:09:49.084 SGL Command Set: Supported 00:09:49.084 SGL Keyed: Not Supported 00:09:49.084 SGL Bit Bucket Descriptor: Not Supported 00:09:49.084 SGL Metadata Pointer: Not Supported 00:09:49.084 Oversized SGL: Not Supported 00:09:49.084 SGL Metadata Address: Not Supported 00:09:49.084 SGL Offset: Not Supported 00:09:49.084 Transport SGL Data Block: Not Supported 00:09:49.084 Replay Protected Memory Block: Not Supported 00:09:49.084 00:09:49.084 Firmware Slot Information 00:09:49.084 ========================= 00:09:49.084 Active slot: 1 00:09:49.084 Slot 1 Firmware Revision: 1.0 00:09:49.084 00:09:49.084 00:09:49.084 Commands Supported and Effects 00:09:49.084 ============================== 00:09:49.084 Admin Commands 00:09:49.084 -------------- 00:09:49.084 Delete I/O Submission Queue (00h): Supported 00:09:49.084 Create I/O Submission Queue (01h): Supported 00:09:49.084 Get Log Page (02h): Supported 00:09:49.084 Delete I/O Completion Queue (04h): Supported 00:09:49.084 Create I/O Completion Queue (05h): Supported 00:09:49.084 Identify (06h): Supported 00:09:49.084 Abort (08h): Supported 00:09:49.084 Set Features (09h): Supported 00:09:49.084 Get Features (0Ah): Supported 00:09:49.084 Asynchronous Event Request (0Ch): Supported 00:09:49.084 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:49.084 Directive Send (19h): Supported 00:09:49.084 Directive Receive (1Ah): Supported 00:09:49.084 Virtualization Management (1Ch): Supported 00:09:49.084 Doorbell Buffer Config (7Ch): Supported 00:09:49.084 Format NVM (80h): Supported LBA-Change 00:09:49.084 I/O Commands 00:09:49.084 ------------ 00:09:49.084 Flush (00h): Supported LBA-Change 00:09:49.084 Write (01h): Supported LBA-Change 00:09:49.084 Read (02h): Supported 00:09:49.084 Compare (05h): Supported 00:09:49.084 Write Zeroes (08h): Supported LBA-Change 00:09:49.085 Dataset Management (09h): Supported LBA-Change 00:09:49.085 Unknown (0Ch): Supported 00:09:49.085 Unknown (12h): Supported 00:09:49.085 Copy (19h): Supported LBA-Change 00:09:49.085 Unknown (1Dh): Supported LBA-Change 00:09:49.085 00:09:49.085 Error Log 00:09:49.085 ========= 00:09:49.085 00:09:49.085 Arbitration 00:09:49.085 =========== 00:09:49.085 Arbitration Burst: no limit 00:09:49.085 00:09:49.085 Power Management 00:09:49.085 ================ 00:09:49.085 Number of Power States: 1 00:09:49.085 Current Power State: Power State #0 00:09:49.085 Power State #0: 00:09:49.085 Max Power: 25.00 W 00:09:49.085 Non-Operational State: Operational 00:09:49.085 Entry Latency: 16 microseconds 00:09:49.085 Exit Latency: 4 microseconds 00:09:49.085 Relative Read Throughput: 0 00:09:49.085 Relative Read Latency: 0 00:09:49.085 Relative Write Throughput: 0 00:09:49.085 Relative Write Latency: 0 00:09:49.085 Idle Power: Not 
Reported 00:09:49.085 Active Power: Not Reported 00:09:49.085 Non-Operational Permissive Mode: Not Supported 00:09:49.085 00:09:49.085 Health Information 00:09:49.085 ================== 00:09:49.085 Critical Warnings: 00:09:49.085 Available Spare Space: OK 00:09:49.085 Temperature: OK 00:09:49.085 Device Reliability: OK 00:09:49.085 Read Only: No 00:09:49.085 Volatile Memory Backup: OK 00:09:49.085 Current Temperature: 323 Kelvin (50 Celsius) 00:09:49.085 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:49.085 Available Spare: 0% 00:09:49.085 Available Spare Threshold: 0% 00:09:49.085 Life Percentage Used: 0% 00:09:49.085 Data Units Read: 1411 00:09:49.085 Data Units Written: 653 00:09:49.085 Host Read Commands: 58762 00:09:49.085 Host Write Commands: 28802 00:09:49.085 Controller Busy Time: 0 minutes 00:09:49.085 Power Cycles: 0 00:09:49.085 Power On Hours: 0 hours 00:09:49.085 Unsafe Shutdowns: 0 00:09:49.085 Unrecoverable Media Errors: 0 00:09:49.085 Lifetime Error Log Entries: 0 00:09:49.085 Warning Temperature Time: 0 minutes 00:09:49.085 Critical Temperature Time: 0 minutes 00:09:49.085 00:09:49.085 Number of Queues 00:09:49.085 ================ 00:09:49.085 Number of I/O Submission Queues: 64 00:09:49.085 Number of I/O Completion Queues: 64 00:09:49.085 00:09:49.085 ZNS Specific Controller Data 00:09:49.085 ============================ 00:09:49.085 Zone Append Size Limit: 0 00:09:49.085 00:09:49.085 00:09:49.085 Active Namespaces 00:09:49.085 ================= 00:09:49.085 Namespace ID:1 00:09:49.085 Error Recovery Timeout: Unlimited 00:09:49.085 Command Set Identifier: NVM (00h) 00:09:49.085 Deallocate: Supported 00:09:49.085 Deallocated/Unwritten Error: Supported 00:09:49.085 Deallocated Read Value: All 0x00 00:09:49.085 Deallocate in Write Zeroes: Not Supported 00:09:49.085 Deallocated Guard Field: 0xFFFF 00:09:49.085 Flush: Supported 00:09:49.085 Reservation: Not Supported 00:09:49.085 Namespace Sharing Capabilities: Multiple Controllers 00:09:49.085 Size (in LBAs): 262144 (1GiB) 00:09:49.085 Capacity (in LBAs): 262144 (1GiB) 00:09:49.085 Utilization (in LBAs): 262144 (1GiB) 00:09:49.085 Thin Provisioning: Not Supported 00:09:49.085 Per-NS Atomic Units: No 00:09:49.085 Maximum Single Source Range Length: 128 00:09:49.085 Maximum Copy Length: 128 00:09:49.085 Maximum Source Range Count: 128 00:09:49.085 NGUID/EUI64 Never Reused: No 00:09:49.085 Namespace Write Protected: No 00:09:49.085 Endurance group ID: 1 00:09:49.085 Number of LBA Formats: 8 00:09:49.085 Current LBA Format: LBA Format #04 00:09:49.085 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:49.085 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:49.085 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:49.085 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:49.085 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:49.085 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:49.085 [2024-11-09 16:19:08.644900] nvme_ctrlr.c:3472:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:08.0] process 63601 terminated unexpected 00:09:49.085 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:49.085 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:49.085 00:09:49.085 Get Feature FDP: 00:09:49.085 ================ 00:09:49.085 Enabled: Yes 00:09:49.085 FDP configuration index: 0 00:09:49.085 00:09:49.085 FDP configurations log page 00:09:49.085 =========================== 00:09:49.085 Number of FDP configurations: 1 00:09:49.085 Version: 0 00:09:49.085 Size: 112 00:09:49.085 FDP
Configuration Descriptor: 0 00:09:49.085 Descriptor Size: 96 00:09:49.085 Reclaim Group Identifier format: 2 00:09:49.085 FDP Volatile Write Cache: Not Present 00:09:49.085 FDP Configuration: Valid 00:09:49.085 Vendor Specific Size: 0 00:09:49.085 Number of Reclaim Groups: 2 00:09:49.085 Number of Reclaim Unit Handles: 8 00:09:49.085 Max Placement Identifiers: 128 00:09:49.085 Number of Namespaces Supported: 256 00:09:49.085 Reclaim unit Nominal Size: 6000000 bytes 00:09:49.085 Estimated Reclaim Unit Time Limit: Not Reported 00:09:49.085 RUH Desc #000: RUH Type: Initially Isolated 00:09:49.085 RUH Desc #001: RUH Type: Initially Isolated 00:09:49.085 RUH Desc #002: RUH Type: Initially Isolated 00:09:49.085 RUH Desc #003: RUH Type: Initially Isolated 00:09:49.085 RUH Desc #004: RUH Type: Initially Isolated 00:09:49.085 RUH Desc #005: RUH Type: Initially Isolated 00:09:49.085 RUH Desc #006: RUH Type: Initially Isolated 00:09:49.085 RUH Desc #007: RUH Type: Initially Isolated 00:09:49.085 00:09:49.085 FDP reclaim unit handle usage log page 00:09:49.085 ====================================== 00:09:49.085 Number of Reclaim Unit Handles: 8 00:09:49.085 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:49.085 RUH Usage Desc #001: RUH Attributes: Unused 00:09:49.085 RUH Usage Desc #002: RUH Attributes: Unused 00:09:49.085 RUH Usage Desc #003: RUH Attributes: Unused 00:09:49.085 RUH Usage Desc #004: RUH Attributes: Unused 00:09:49.085 RUH Usage Desc #005: RUH Attributes: Unused 00:09:49.085 RUH Usage Desc #006: RUH Attributes: Unused 00:09:49.085 RUH Usage Desc #007: RUH Attributes: Unused 00:09:49.085 00:09:49.085 FDP statistics log page 00:09:49.085 ======================= 00:09:49.085 Host bytes with metadata written: 434241536 00:09:49.085 Media bytes with metadata written: 434348032 00:09:49.085 Media bytes erased: 0 00:09:49.085 00:09:49.085 FDP events log page 00:09:49.085 =================== 00:09:49.085 Number of FDP events: 0 00:09:49.085 00:09:49.085 ===================================================== 00:09:49.085 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:49.085 ===================================================== 00:09:49.085 Controller Capabilities/Features 00:09:49.085 ================================ 00:09:49.085 Vendor ID: 1b36 00:09:49.085 Subsystem Vendor ID: 1af4 00:09:49.085 Serial Number: 12342 00:09:49.085 Model Number: QEMU NVMe Ctrl 00:09:49.085 Firmware Version: 8.0.0 00:09:49.085 Recommended Arb Burst: 6 00:09:49.085 IEEE OUI Identifier: 00 54 52 00:09:49.085 Multi-path I/O 00:09:49.085 May have multiple subsystem ports: No 00:09:49.085 May have multiple controllers: No 00:09:49.085 Associated with SR-IOV VF: No 00:09:49.085 Max Data Transfer Size: 524288 00:09:49.085 Max Number of Namespaces: 256 00:09:49.085 Max Number of I/O Queues: 64 00:09:49.085 NVMe Specification Version (VS): 1.4 00:09:49.085 NVMe Specification Version (Identify): 1.4 00:09:49.085 Maximum Queue Entries: 2048 00:09:49.085 Contiguous Queues Required: Yes 00:09:49.085 Arbitration Mechanisms Supported 00:09:49.085 Weighted Round Robin: Not Supported 00:09:49.085 Vendor Specific: Not Supported 00:09:49.086 Reset Timeout: 7500 ms 00:09:49.086 Doorbell Stride: 4 bytes 00:09:49.086 NVM Subsystem Reset: Not Supported 00:09:49.086 Command Sets Supported 00:09:49.086 NVM Command Set: Supported 00:09:49.086 Boot Partition: Not Supported 00:09:49.086 Memory Page Size Minimum: 4096 bytes 00:09:49.086 Memory Page Size Maximum: 65536 bytes 00:09:49.086 Persistent Memory Region: Not
Supported 00:09:49.086 Optional Asynchronous Events Supported 00:09:49.086 Namespace Attribute Notices: Supported 00:09:49.086 Firmware Activation Notices: Not Supported 00:09:49.086 ANA Change Notices: Not Supported 00:09:49.086 PLE Aggregate Log Change Notices: Not Supported 00:09:49.086 LBA Status Info Alert Notices: Not Supported 00:09:49.086 EGE Aggregate Log Change Notices: Not Supported 00:09:49.086 Normal NVM Subsystem Shutdown event: Not Supported 00:09:49.086 Zone Descriptor Change Notices: Not Supported 00:09:49.086 Discovery Log Change Notices: Not Supported 00:09:49.086 Controller Attributes 00:09:49.086 128-bit Host Identifier: Not Supported 00:09:49.086 Non-Operational Permissive Mode: Not Supported 00:09:49.086 NVM Sets: Not Supported 00:09:49.086 Read Recovery Levels: Not Supported 00:09:49.086 Endurance Groups: Not Supported 00:09:49.086 Predictable Latency Mode: Not Supported 00:09:49.086 Traffic Based Keep Alive: Not Supported 00:09:49.086 Namespace Granularity: Not Supported 00:09:49.086 SQ Associations: Not Supported 00:09:49.086 UUID List: Not Supported 00:09:49.086 Multi-Domain Subsystem: Not Supported 00:09:49.086 Fixed Capacity Management: Not Supported 00:09:49.086 Variable Capacity Management: Not Supported 00:09:49.086 Delete Endurance Group: Not Supported 00:09:49.086 Delete NVM Set: Not Supported 00:09:49.086 Extended LBA Formats Supported: Supported 00:09:49.086 Flexible Data Placement Supported: Not Supported 00:09:49.086 00:09:49.086 Controller Memory Buffer Support 00:09:49.086 ================================ 00:09:49.086 Supported: No 00:09:49.086 00:09:49.086 Persistent Memory Region Support 00:09:49.086 ================================ 00:09:49.086 Supported: No 00:09:49.086 00:09:49.086 Admin Command Set Attributes 00:09:49.086 ============================ 00:09:49.086 Security Send/Receive: Not Supported 00:09:49.086 Format NVM: Supported 00:09:49.086 Firmware Activate/Download: Not Supported 00:09:49.086 Namespace Management: Supported 00:09:49.086 Device Self-Test: Not Supported 00:09:49.086 Directives: Supported 00:09:49.086 NVMe-MI: Not Supported 00:09:49.086 Virtualization Management: Not Supported 00:09:49.086 Doorbell Buffer Config: Supported 00:09:49.086 Get LBA Status Capability: Not Supported 00:09:49.086 Command & Feature Lockdown Capability: Not Supported 00:09:49.086 Abort Command Limit: 4 00:09:49.086 Async Event Request Limit: 4 00:09:49.086 Number of Firmware Slots: N/A 00:09:49.086 Firmware Slot 1 Read-Only: N/A 00:09:49.086 Firmware Activation Without Reset: N/A 00:09:49.086 Multiple Update Detection Support: N/A 00:09:49.086 Firmware Update Granularity: No Information Provided 00:09:49.086 Per-Namespace SMART Log: Yes 00:09:49.086 Asymmetric Namespace Access Log Page: Not Supported 00:09:49.086 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:49.086 Command Effects Log Page: Supported 00:09:49.086 Get Log Page Extended Data: Supported 00:09:49.086 Telemetry Log Pages: Not Supported 00:09:49.086 Persistent Event Log Pages: Not Supported 00:09:49.086 Supported Log Pages Log Page: May Support 00:09:49.086 Commands Supported & Effects Log Page: Not Supported 00:09:49.086 Feature Identifiers & Effects Log Page: May Support 00:09:49.086 NVMe-MI Commands & Effects Log Page: May Support 00:09:49.086 Data Area 4 for Telemetry Log: Not Supported 00:09:49.086 Error Log Page Entries Supported: 1 00:09:49.086 Keep Alive: Not Supported 00:09:49.086 00:09:49.086 NVM Command Set Attributes 00:09:49.086 ========================== 00:09:49.086 
Submission Queue Entry Size 00:09:49.086 Max: 64 00:09:49.086 Min: 64 00:09:49.086 Completion Queue Entry Size 00:09:49.086 Max: 16 00:09:49.086 Min: 16 00:09:49.086 Number of Namespaces: 256 00:09:49.086 Compare Command: Supported 00:09:49.086 Write Uncorrectable Command: Not Supported 00:09:49.086 Dataset Management Command: Supported 00:09:49.086 Write Zeroes Command: Supported 00:09:49.086 Set Features Save Field: Supported 00:09:49.086 Reservations: Not Supported 00:09:49.086 Timestamp: Supported 00:09:49.086 Copy: Supported 00:09:49.086 Volatile Write Cache: Present 00:09:49.086 Atomic Write Unit (Normal): 1 00:09:49.086 Atomic Write Unit (PFail): 1 00:09:49.086 Atomic Compare & Write Unit: 1 00:09:49.086 Fused Compare & Write: Not Supported 00:09:49.086 Scatter-Gather List 00:09:49.086 SGL Command Set: Supported 00:09:49.086 SGL Keyed: Not Supported 00:09:49.086 SGL Bit Bucket Descriptor: Not Supported 00:09:49.086 SGL Metadata Pointer: Not Supported 00:09:49.086 Oversized SGL: Not Supported 00:09:49.086 SGL Metadata Address: Not Supported 00:09:49.086 SGL Offset: Not Supported 00:09:49.086 Transport SGL Data Block: Not Supported 00:09:49.086 Replay Protected Memory Block: Not Supported 00:09:49.086 00:09:49.086 Firmware Slot Information 00:09:49.086 ========================= 00:09:49.086 Active slot: 1 00:09:49.086 Slot 1 Firmware Revision: 1.0 00:09:49.086 00:09:49.086 00:09:49.086 Commands Supported and Effects 00:09:49.086 ============================== 00:09:49.086 Admin Commands 00:09:49.086 -------------- 00:09:49.086 Delete I/O Submission Queue (00h): Supported 00:09:49.086 Create I/O Submission Queue (01h): Supported 00:09:49.086 Get Log Page (02h): Supported 00:09:49.086 Delete I/O Completion Queue (04h): Supported 00:09:49.086 Create I/O Completion Queue (05h): Supported 00:09:49.086 Identify (06h): Supported 00:09:49.086 Abort (08h): Supported 00:09:49.086 Set Features (09h): Supported 00:09:49.086 Get Features (0Ah): Supported 00:09:49.086 Asynchronous Event Request (0Ch): Supported 00:09:49.086 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:49.086 Directive Send (19h): Supported 00:09:49.086 Directive Receive (1Ah): Supported 00:09:49.086 Virtualization Management (1Ch): Supported 00:09:49.086 Doorbell Buffer Config (7Ch): Supported 00:09:49.086 Format NVM (80h): Supported LBA-Change 00:09:49.086 I/O Commands 00:09:49.086 ------------ 00:09:49.086 Flush (00h): Supported LBA-Change 00:09:49.086 Write (01h): Supported LBA-Change 00:09:49.086 Read (02h): Supported 00:09:49.086 Compare (05h): Supported 00:09:49.086 Write Zeroes (08h): Supported LBA-Change 00:09:49.086 Dataset Management (09h): Supported LBA-Change 00:09:49.086 Unknown (0Ch): Supported 00:09:49.086 Unknown (12h): Supported 00:09:49.086 Copy (19h): Supported LBA-Change 00:09:49.087 Unknown (1Dh): Supported LBA-Change 00:09:49.087 00:09:49.087 Error Log 00:09:49.087 ========= 00:09:49.087 00:09:49.087 Arbitration 00:09:49.087 =========== 00:09:49.087 Arbitration Burst: no limit 00:09:49.087 00:09:49.087 Power Management 00:09:49.087 ================ 00:09:49.087 Number of Power States: 1 00:09:49.087 Current Power State: Power State #0 00:09:49.087 Power State #0: 00:09:49.087 Max Power: 25.00 W 00:09:49.087 Non-Operational State: Operational 00:09:49.087 Entry Latency: 16 microseconds 00:09:49.087 Exit Latency: 4 microseconds 00:09:49.087 Relative Read Throughput: 0 00:09:49.087 Relative Read Latency: 0 00:09:49.087 Relative Write Throughput: 0 00:09:49.087 Relative Write Latency: 0 
00:09:49.087 Idle Power: Not Reported 00:09:49.087 Active Power: Not Reported 00:09:49.087 Non-Operational Permissive Mode: Not Supported 00:09:49.087 00:09:49.087 Health Information 00:09:49.087 ================== 00:09:49.087 Critical Warnings: 00:09:49.087 Available Spare Space: OK 00:09:49.087 Temperature: OK 00:09:49.087 Device Reliability: OK 00:09:49.087 Read Only: No 00:09:49.087 Volatile Memory Backup: OK 00:09:49.087 Current Temperature: 323 Kelvin (50 Celsius) 00:09:49.087 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:49.087 Available Spare: 0% 00:09:49.087 Available Spare Threshold: 0% 00:09:49.087 Life Percentage Used: 0% 00:09:49.087 Data Units Read: 3807 00:09:49.087 Data Units Written: 1754 00:09:49.087 Host Read Commands: 173100 00:09:49.087 Host Write Commands: 84770 00:09:49.087 Controller Busy Time: 0 minutes 00:09:49.087 Power Cycles: 0 00:09:49.087 Power On Hours: 0 hours 00:09:49.087 Unsafe Shutdowns: 0 00:09:49.087 Unrecoverable Media Errors: 0 00:09:49.087 Lifetime Error Log Entries: 0 00:09:49.087 Warning Temperature Time: 0 minutes 00:09:49.087 Critical Temperature Time: 0 minutes 00:09:49.087 00:09:49.087 Number of Queues 00:09:49.087 ================ 00:09:49.087 Number of I/O Submission Queues: 64 00:09:49.087 Number of I/O Completion Queues: 64 00:09:49.087 00:09:49.087 ZNS Specific Controller Data 00:09:49.087 ============================ 00:09:49.087 Zone Append Size Limit: 0 00:09:49.087 00:09:49.087 00:09:49.087 Active Namespaces 00:09:49.087 ================= 00:09:49.087 Namespace ID:1 00:09:49.087 Error Recovery Timeout: Unlimited 00:09:49.087 Command Set Identifier: NVM (00h) 00:09:49.087 Deallocate: Supported 00:09:49.087 Deallocated/Unwritten Error: Supported 00:09:49.087 Deallocated Read Value: All 0x00 00:09:49.087 Deallocate in Write Zeroes: Not Supported 00:09:49.087 Deallocated Guard Field: 0xFFFF 00:09:49.087 Flush: Supported 00:09:49.087 Reservation: Not Supported 00:09:49.087 Namespace Sharing Capabilities: Private 00:09:49.087 Size (in LBAs): 1048576 (4GiB) 00:09:49.087 Capacity (in LBAs): 1048576 (4GiB) 00:09:49.087 Utilization (in LBAs): 1048576 (4GiB) 00:09:49.087 Thin Provisioning: Not Supported 00:09:49.087 Per-NS Atomic Units: No 00:09:49.087 Maximum Single Source Range Length: 128 00:09:49.087 Maximum Copy Length: 128 00:09:49.087 Maximum Source Range Count: 128 00:09:49.087 NGUID/EUI64 Never Reused: No 00:09:49.087 Namespace Write Protected: No 00:09:49.087 Number of LBA Formats: 8 00:09:49.087 Current LBA Format: LBA Format #04 00:09:49.087 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:49.087 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:49.087 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:49.087 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:49.087 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:49.087 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:49.087 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:49.087 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:49.087 00:09:49.087 Namespace ID:2 00:09:49.087 Error Recovery Timeout: Unlimited 00:09:49.087 Command Set Identifier: NVM (00h) 00:09:49.087 Deallocate: Supported 00:09:49.087 Deallocated/Unwritten Error: Supported 00:09:49.087 Deallocated Read Value: All 0x00 00:09:49.087 Deallocate in Write Zeroes: Not Supported 00:09:49.087 Deallocated Guard Field: 0xFFFF 00:09:49.087 Flush: Supported 00:09:49.087 Reservation: Not Supported 00:09:49.087 Namespace Sharing Capabilities: Private 00:09:49.087 Size (in LBAs): 
1048576 (4GiB) 00:09:49.087 Capacity (in LBAs): 1048576 (4GiB) 00:09:49.087 Utilization (in LBAs): 1048576 (4GiB) 00:09:49.087 Thin Provisioning: Not Supported 00:09:49.087 Per-NS Atomic Units: No 00:09:49.087 Maximum Single Source Range Length: 128 00:09:49.087 Maximum Copy Length: 128 00:09:49.087 Maximum Source Range Count: 128 00:09:49.087 NGUID/EUI64 Never Reused: No 00:09:49.087 Namespace Write Protected: No 00:09:49.087 Number of LBA Formats: 8 00:09:49.087 Current LBA Format: LBA Format #04 00:09:49.087 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:49.087 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:49.087 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:49.087 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:49.087 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:49.087 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:49.087 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:49.087 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:49.087 00:09:49.087 Namespace ID:3 00:09:49.087 Error Recovery Timeout: Unlimited 00:09:49.087 Command Set Identifier: NVM (00h) 00:09:49.087 Deallocate: Supported 00:09:49.087 Deallocated/Unwritten Error: Supported 00:09:49.087 Deallocated Read Value: All 0x00 00:09:49.087 Deallocate in Write Zeroes: Not Supported 00:09:49.087 Deallocated Guard Field: 0xFFFF 00:09:49.087 Flush: Supported 00:09:49.087 Reservation: Not Supported 00:09:49.087 Namespace Sharing Capabilities: Private 00:09:49.087 Size (in LBAs): 1048576 (4GiB) 00:09:49.087 Capacity (in LBAs): 1048576 (4GiB) 00:09:49.087 Utilization (in LBAs): 1048576 (4GiB) 00:09:49.087 Thin Provisioning: Not Supported 00:09:49.087 Per-NS Atomic Units: No 00:09:49.087 Maximum Single Source Range Length: 128 00:09:49.087 Maximum Copy Length: 128 00:09:49.087 Maximum Source Range Count: 128 00:09:49.087 NGUID/EUI64 Never Reused: No 00:09:49.087 Namespace Write Protected: No 00:09:49.087 Number of LBA Formats: 8 00:09:49.087 Current LBA Format: LBA Format #04 00:09:49.087 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:49.087 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:49.087 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:49.087 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:49.087 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:49.087 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:49.087 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:49.087 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:49.087 00:09:49.087 16:19:08 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:49.087 16:19:08 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:09:49.347 ===================================================== 00:09:49.347 NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:49.347 ===================================================== 00:09:49.347 Controller Capabilities/Features 00:09:49.347 ================================ 00:09:49.347 Vendor ID: 1b36 00:09:49.347 Subsystem Vendor ID: 1af4 00:09:49.347 Serial Number: 12340 00:09:49.347 Model Number: QEMU NVMe Ctrl 00:09:49.347 Firmware Version: 8.0.0 00:09:49.347 Recommended Arb Burst: 6 00:09:49.347 IEEE OUI Identifier: 00 54 52 00:09:49.347 Multi-path I/O 00:09:49.347 May have multiple subsystem ports: No 00:09:49.347 May have multiple controllers: No 00:09:49.347 Associated with SR-IOV VF: No 00:09:49.347 Max Data Transfer Size: 524288 00:09:49.347 Max Number of Namespaces: 256 
00:09:49.347 Max Number of I/O Queues: 64 00:09:49.347 NVMe Specification Version (VS): 1.4 00:09:49.347 NVMe Specification Version (Identify): 1.4 00:09:49.347 Maximum Queue Entries: 2048 00:09:49.347 Contiguous Queues Required: Yes 00:09:49.347 Arbitration Mechanisms Supported 00:09:49.347 Weighted Round Robin: Not Supported 00:09:49.347 Vendor Specific: Not Supported 00:09:49.347 Reset Timeout: 7500 ms 00:09:49.347 Doorbell Stride: 4 bytes 00:09:49.347 NVM Subsystem Reset: Not Supported 00:09:49.348 Command Sets Supported 00:09:49.348 NVM Command Set: Supported 00:09:49.348 Boot Partition: Not Supported 00:09:49.348 Memory Page Size Minimum: 4096 bytes 00:09:49.348 Memory Page Size Maximum: 65536 bytes 00:09:49.348 Persistent Memory Region: Not Supported 00:09:49.348 Optional Asynchronous Events Supported 00:09:49.348 Namespace Attribute Notices: Supported 00:09:49.348 Firmware Activation Notices: Not Supported 00:09:49.348 ANA Change Notices: Not Supported 00:09:49.348 PLE Aggregate Log Change Notices: Not Supported 00:09:49.348 LBA Status Info Alert Notices: Not Supported 00:09:49.348 EGE Aggregate Log Change Notices: Not Supported 00:09:49.348 Normal NVM Subsystem Shutdown event: Not Supported 00:09:49.348 Zone Descriptor Change Notices: Not Supported 00:09:49.348 Discovery Log Change Notices: Not Supported 00:09:49.348 Controller Attributes 00:09:49.348 128-bit Host Identifier: Not Supported 00:09:49.348 Non-Operational Permissive Mode: Not Supported 00:09:49.348 NVM Sets: Not Supported 00:09:49.348 Read Recovery Levels: Not Supported 00:09:49.348 Endurance Groups: Not Supported 00:09:49.348 Predictable Latency Mode: Not Supported 00:09:49.348 Traffic Based Keep Alive: Not Supported 00:09:49.348 Namespace Granularity: Not Supported 00:09:49.348 SQ Associations: Not Supported 00:09:49.348 UUID List: Not Supported 00:09:49.348 Multi-Domain Subsystem: Not Supported 00:09:49.348 Fixed Capacity Management: Not Supported 00:09:49.348 Variable Capacity Management: Not Supported 00:09:49.348 Delete Endurance Group: Not Supported 00:09:49.348 Delete NVM Set: Not Supported 00:09:49.348 Extended LBA Formats Supported: Supported 00:09:49.348 Flexible Data Placement Supported: Not Supported 00:09:49.348 00:09:49.348 Controller Memory Buffer Support 00:09:49.348 ================================ 00:09:49.348 Supported: No 00:09:49.348 00:09:49.348 Persistent Memory Region Support 00:09:49.348 ================================ 00:09:49.348 Supported: No 00:09:49.348 00:09:49.348 Admin Command Set Attributes 00:09:49.348 ============================ 00:09:49.348 Security Send/Receive: Not Supported 00:09:49.348 Format NVM: Supported 00:09:49.348 Firmware Activate/Download: Not Supported 00:09:49.348 Namespace Management: Supported 00:09:49.348 Device Self-Test: Not Supported 00:09:49.348 Directives: Supported 00:09:49.348 NVMe-MI: Not Supported 00:09:49.348 Virtualization Management: Not Supported 00:09:49.348 Doorbell Buffer Config: Supported 00:09:49.348 Get LBA Status Capability: Not Supported 00:09:49.348 Command & Feature Lockdown Capability: Not Supported 00:09:49.348 Abort Command Limit: 4 00:09:49.348 Async Event Request Limit: 4 00:09:49.348 Number of Firmware Slots: N/A 00:09:49.348 Firmware Slot 1 Read-Only: N/A 00:09:49.348 Firmware Activation Without Reset: N/A 00:09:49.348 Multiple Update Detection Support: N/A 00:09:49.348 Firmware Update Granularity: No Information Provided 00:09:49.348 Per-Namespace SMART Log: Yes 00:09:49.348 Asymmetric Namespace Access Log Page: Not Supported 
00:09:49.348 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:49.348 Command Effects Log Page: Supported 00:09:49.348 Get Log Page Extended Data: Supported 00:09:49.348 Telemetry Log Pages: Not Supported 00:09:49.348 Persistent Event Log Pages: Not Supported 00:09:49.348 Supported Log Pages Log Page: May Support 00:09:49.348 Commands Supported & Effects Log Page: Not Supported 00:09:49.348 Feature Identifiers & Effects Log Page: May Support 00:09:49.348 NVMe-MI Commands & Effects Log Page: May Support 00:09:49.348 Data Area 4 for Telemetry Log: Not Supported 00:09:49.348 Error Log Page Entries Supported: 1 00:09:49.348 Keep Alive: Not Supported 00:09:49.348 00:09:49.348 NVM Command Set Attributes 00:09:49.348 ========================== 00:09:49.348 Submission Queue Entry Size 00:09:49.348 Max: 64 00:09:49.348 Min: 64 00:09:49.348 Completion Queue Entry Size 00:09:49.348 Max: 16 00:09:49.348 Min: 16 00:09:49.348 Number of Namespaces: 256 00:09:49.348 Compare Command: Supported 00:09:49.348 Write Uncorrectable Command: Not Supported 00:09:49.348 Dataset Management Command: Supported 00:09:49.348 Write Zeroes Command: Supported 00:09:49.348 Set Features Save Field: Supported 00:09:49.348 Reservations: Not Supported 00:09:49.348 Timestamp: Supported 00:09:49.348 Copy: Supported 00:09:49.348 Volatile Write Cache: Present 00:09:49.348 Atomic Write Unit (Normal): 1 00:09:49.348 Atomic Write Unit (PFail): 1 00:09:49.348 Atomic Compare & Write Unit: 1 00:09:49.348 Fused Compare & Write: Not Supported 00:09:49.348 Scatter-Gather List 00:09:49.348 SGL Command Set: Supported 00:09:49.348 SGL Keyed: Not Supported 00:09:49.348 SGL Bit Bucket Descriptor: Not Supported 00:09:49.348 SGL Metadata Pointer: Not Supported 00:09:49.348 Oversized SGL: Not Supported 00:09:49.348 SGL Metadata Address: Not Supported 00:09:49.348 SGL Offset: Not Supported 00:09:49.348 Transport SGL Data Block: Not Supported 00:09:49.348 Replay Protected Memory Block: Not Supported 00:09:49.348 00:09:49.348 Firmware Slot Information 00:09:49.348 ========================= 00:09:49.348 Active slot: 1 00:09:49.348 Slot 1 Firmware Revision: 1.0 00:09:49.348 00:09:49.348 00:09:49.348 Commands Supported and Effects 00:09:49.348 ============================== 00:09:49.348 Admin Commands 00:09:49.348 -------------- 00:09:49.348 Delete I/O Submission Queue (00h): Supported 00:09:49.348 Create I/O Submission Queue (01h): Supported 00:09:49.348 Get Log Page (02h): Supported 00:09:49.348 Delete I/O Completion Queue (04h): Supported 00:09:49.348 Create I/O Completion Queue (05h): Supported 00:09:49.348 Identify (06h): Supported 00:09:49.348 Abort (08h): Supported 00:09:49.348 Set Features (09h): Supported 00:09:49.348 Get Features (0Ah): Supported 00:09:49.348 Asynchronous Event Request (0Ch): Supported 00:09:49.348 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:49.348 Directive Send (19h): Supported 00:09:49.348 Directive Receive (1Ah): Supported 00:09:49.348 Virtualization Management (1Ch): Supported 00:09:49.348 Doorbell Buffer Config (7Ch): Supported 00:09:49.348 Format NVM (80h): Supported LBA-Change 00:09:49.348 I/O Commands 00:09:49.348 ------------ 00:09:49.348 Flush (00h): Supported LBA-Change 00:09:49.348 Write (01h): Supported LBA-Change 00:09:49.348 Read (02h): Supported 00:09:49.348 Compare (05h): Supported 00:09:49.348 Write Zeroes (08h): Supported LBA-Change 00:09:49.348 Dataset Management (09h): Supported LBA-Change 00:09:49.348 Unknown (0Ch): Supported 00:09:49.348 Unknown (12h): Supported 00:09:49.348 Copy (19h): 
Supported LBA-Change 00:09:49.348 Unknown (1Dh): Supported LBA-Change 00:09:49.348 00:09:49.348 Error Log 00:09:49.348 ========= 00:09:49.348 00:09:49.348 Arbitration 00:09:49.348 =========== 00:09:49.348 Arbitration Burst: no limit 00:09:49.348 00:09:49.348 Power Management 00:09:49.348 ================ 00:09:49.348 Number of Power States: 1 00:09:49.348 Current Power State: Power State #0 00:09:49.348 Power State #0: 00:09:49.348 Max Power: 25.00 W 00:09:49.348 Non-Operational State: Operational 00:09:49.348 Entry Latency: 16 microseconds 00:09:49.348 Exit Latency: 4 microseconds 00:09:49.348 Relative Read Throughput: 0 00:09:49.348 Relative Read Latency: 0 00:09:49.348 Relative Write Throughput: 0 00:09:49.348 Relative Write Latency: 0 00:09:49.348 Idle Power: Not Reported 00:09:49.348 Active Power: Not Reported 00:09:49.348 Non-Operational Permissive Mode: Not Supported 00:09:49.348 00:09:49.348 Health Information 00:09:49.348 ================== 00:09:49.348 Critical Warnings: 00:09:49.348 Available Spare Space: OK 00:09:49.348 Temperature: OK 00:09:49.348 Device Reliability: OK 00:09:49.348 Read Only: No 00:09:49.348 Volatile Memory Backup: OK 00:09:49.348 Current Temperature: 323 Kelvin (50 Celsius) 00:09:49.348 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:49.348 Available Spare: 0% 00:09:49.348 Available Spare Threshold: 0% 00:09:49.348 Life Percentage Used: 0% 00:09:49.348 Data Units Read: 1791 00:09:49.348 Data Units Written: 824 00:09:49.348 Host Read Commands: 86471 00:09:49.348 Host Write Commands: 42934 00:09:49.348 Controller Busy Time: 0 minutes 00:09:49.348 Power Cycles: 0 00:09:49.348 Power On Hours: 0 hours 00:09:49.348 Unsafe Shutdowns: 0 00:09:49.348 Unrecoverable Media Errors: 0 00:09:49.348 Lifetime Error Log Entries: 0 00:09:49.348 Warning Temperature Time: 0 minutes 00:09:49.348 Critical Temperature Time: 0 minutes 00:09:49.348 00:09:49.348 Number of Queues 00:09:49.348 ================ 00:09:49.349 Number of I/O Submission Queues: 64 00:09:49.349 Number of I/O Completion Queues: 64 00:09:49.349 00:09:49.349 ZNS Specific Controller Data 00:09:49.349 ============================ 00:09:49.349 Zone Append Size Limit: 0 00:09:49.349 00:09:49.349 00:09:49.349 Active Namespaces 00:09:49.349 ================= 00:09:49.349 Namespace ID:1 00:09:49.349 Error Recovery Timeout: Unlimited 00:09:49.349 Command Set Identifier: NVM (00h) 00:09:49.349 Deallocate: Supported 00:09:49.349 Deallocated/Unwritten Error: Supported 00:09:49.349 Deallocated Read Value: All 0x00 00:09:49.349 Deallocate in Write Zeroes: Not Supported 00:09:49.349 Deallocated Guard Field: 0xFFFF 00:09:49.349 Flush: Supported 00:09:49.349 Reservation: Not Supported 00:09:49.349 Metadata Transferred as: Separate Metadata Buffer 00:09:49.349 Namespace Sharing Capabilities: Private 00:09:49.349 Size (in LBAs): 1548666 (5GiB) 00:09:49.349 Capacity (in LBAs): 1548666 (5GiB) 00:09:49.349 Utilization (in LBAs): 1548666 (5GiB) 00:09:49.349 Thin Provisioning: Not Supported 00:09:49.349 Per-NS Atomic Units: No 00:09:49.349 Maximum Single Source Range Length: 128 00:09:49.349 Maximum Copy Length: 128 00:09:49.349 Maximum Source Range Count: 128 00:09:49.349 NGUID/EUI64 Never Reused: No 00:09:49.349 Namespace Write Protected: No 00:09:49.349 Number of LBA Formats: 8 00:09:49.349 Current LBA Format: LBA Format #07 00:09:49.349 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:49.349 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:49.349 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:49.349 LBA 
Format #03: Data Size: 512 Metadata Size: 64 00:09:49.349 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:49.349 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:49.349 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:49.349 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:49.349 00:09:49.349 16:19:08 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:49.349 16:19:08 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' -i 0 00:09:49.349 ===================================================== 00:09:49.349 NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:49.349 ===================================================== 00:09:49.349 Controller Capabilities/Features 00:09:49.349 ================================ 00:09:49.349 Vendor ID: 1b36 00:09:49.349 Subsystem Vendor ID: 1af4 00:09:49.349 Serial Number: 12341 00:09:49.349 Model Number: QEMU NVMe Ctrl 00:09:49.349 Firmware Version: 8.0.0 00:09:49.349 Recommended Arb Burst: 6 00:09:49.349 IEEE OUI Identifier: 00 54 52 00:09:49.349 Multi-path I/O 00:09:49.349 May have multiple subsystem ports: No 00:09:49.349 May have multiple controllers: No 00:09:49.349 Associated with SR-IOV VF: No 00:09:49.349 Max Data Transfer Size: 524288 00:09:49.349 Max Number of Namespaces: 256 00:09:49.349 Max Number of I/O Queues: 64 00:09:49.349 NVMe Specification Version (VS): 1.4 00:09:49.349 NVMe Specification Version (Identify): 1.4 00:09:49.349 Maximum Queue Entries: 2048 00:09:49.349 Contiguous Queues Required: Yes 00:09:49.349 Arbitration Mechanisms Supported 00:09:49.349 Weighted Round Robin: Not Supported 00:09:49.349 Vendor Specific: Not Supported 00:09:49.349 Reset Timeout: 7500 ms 00:09:49.349 Doorbell Stride: 4 bytes 00:09:49.349 NVM Subsystem Reset: Not Supported 00:09:49.349 Command Sets Supported 00:09:49.349 NVM Command Set: Supported 00:09:49.349 Boot Partition: Not Supported 00:09:49.349 Memory Page Size Minimum: 4096 bytes 00:09:49.349 Memory Page Size Maximum: 65536 bytes 00:09:49.349 Persistent Memory Region: Not Supported 00:09:49.349 Optional Asynchronous Events Supported 00:09:49.349 Namespace Attribute Notices: Supported 00:09:49.349 Firmware Activation Notices: Not Supported 00:09:49.349 ANA Change Notices: Not Supported 00:09:49.349 PLE Aggregate Log Change Notices: Not Supported 00:09:49.349 LBA Status Info Alert Notices: Not Supported 00:09:49.349 EGE Aggregate Log Change Notices: Not Supported 00:09:49.349 Normal NVM Subsystem Shutdown event: Not Supported 00:09:49.349 Zone Descriptor Change Notices: Not Supported 00:09:49.349 Discovery Log Change Notices: Not Supported 00:09:49.349 Controller Attributes 00:09:49.349 128-bit Host Identifier: Not Supported 00:09:49.349 Non-Operational Permissive Mode: Not Supported 00:09:49.349 NVM Sets: Not Supported 00:09:49.349 Read Recovery Levels: Not Supported 00:09:49.349 Endurance Groups: Not Supported 00:09:49.349 Predictable Latency Mode: Not Supported 00:09:49.349 Traffic Based Keep Alive: Not Supported 00:09:49.349 Namespace Granularity: Not Supported 00:09:49.349 SQ Associations: Not Supported 00:09:49.349 UUID List: Not Supported 00:09:49.349 Multi-Domain Subsystem: Not Supported 00:09:49.349 Fixed Capacity Management: Not Supported 00:09:49.349 Variable Capacity Management: Not Supported 00:09:49.349 Delete Endurance Group: Not Supported 00:09:49.349 Delete NVM Set: Not Supported 00:09:49.349 Extended LBA Formats Supported: Supported 00:09:49.349 Flexible Data Placement Supported: Not Supported 
00:09:49.349 00:09:49.349 Controller Memory Buffer Support 00:09:49.349 ================================ 00:09:49.349 Supported: No 00:09:49.349 00:09:49.349 Persistent Memory Region Support 00:09:49.349 ================================ 00:09:49.349 Supported: No 00:09:49.349 00:09:49.349 Admin Command Set Attributes 00:09:49.349 ============================ 00:09:49.349 Security Send/Receive: Not Supported 00:09:49.349 Format NVM: Supported 00:09:49.349 Firmware Activate/Download: Not Supported 00:09:49.349 Namespace Management: Supported 00:09:49.349 Device Self-Test: Not Supported 00:09:49.349 Directives: Supported 00:09:49.349 NVMe-MI: Not Supported 00:09:49.349 Virtualization Management: Not Supported 00:09:49.349 Doorbell Buffer Config: Supported 00:09:49.349 Get LBA Status Capability: Not Supported 00:09:49.349 Command & Feature Lockdown Capability: Not Supported 00:09:49.349 Abort Command Limit: 4 00:09:49.349 Async Event Request Limit: 4 00:09:49.349 Number of Firmware Slots: N/A 00:09:49.349 Firmware Slot 1 Read-Only: N/A 00:09:49.349 Firmware Activation Without Reset: N/A 00:09:49.349 Multiple Update Detection Support: N/A 00:09:49.349 Firmware Update Granularity: No Information Provided 00:09:49.349 Per-Namespace SMART Log: Yes 00:09:49.349 Asymmetric Namespace Access Log Page: Not Supported 00:09:49.349 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:49.349 Command Effects Log Page: Supported 00:09:49.349 Get Log Page Extended Data: Supported 00:09:49.349 Telemetry Log Pages: Not Supported 00:09:49.349 Persistent Event Log Pages: Not Supported 00:09:49.349 Supported Log Pages Log Page: May Support 00:09:49.349 Commands Supported & Effects Log Page: Not Supported 00:09:49.349 Feature Identifiers & Effects Log Page: May Support 00:09:49.349 NVMe-MI Commands & Effects Log Page: May Support 00:09:49.349 Data Area 4 for Telemetry Log: Not Supported 00:09:49.349 Error Log Page Entries Supported: 1 00:09:49.349 Keep Alive: Not Supported 00:09:49.349 00:09:49.349 NVM Command Set Attributes 00:09:49.349 ========================== 00:09:49.349 Submission Queue Entry Size 00:09:49.349 Max: 64 00:09:49.349 Min: 64 00:09:49.349 Completion Queue Entry Size 00:09:49.349 Max: 16 00:09:49.349 Min: 16 00:09:49.349 Number of Namespaces: 256 00:09:49.349 Compare Command: Supported 00:09:49.349 Write Uncorrectable Command: Not Supported 00:09:49.349 Dataset Management Command: Supported 00:09:49.349 Write Zeroes Command: Supported 00:09:49.349 Set Features Save Field: Supported 00:09:49.349 Reservations: Not Supported 00:09:49.349 Timestamp: Supported 00:09:49.349 Copy: Supported 00:09:49.349 Volatile Write Cache: Present 00:09:49.349 Atomic Write Unit (Normal): 1 00:09:49.349 Atomic Write Unit (PFail): 1 00:09:49.349 Atomic Compare & Write Unit: 1 00:09:49.349 Fused Compare & Write: Not Supported 00:09:49.349 Scatter-Gather List 00:09:49.349 SGL Command Set: Supported 00:09:49.349 SGL Keyed: Not Supported 00:09:49.349 SGL Bit Bucket Descriptor: Not Supported 00:09:49.349 SGL Metadata Pointer: Not Supported 00:09:49.349 Oversized SGL: Not Supported 00:09:49.349 SGL Metadata Address: Not Supported 00:09:49.349 SGL Offset: Not Supported 00:09:49.349 Transport SGL Data Block: Not Supported 00:09:49.349 Replay Protected Memory Block: Not Supported 00:09:49.349 00:09:49.349 Firmware Slot Information 00:09:49.349 ========================= 00:09:49.349 Active slot: 1 00:09:49.349 Slot 1 Firmware Revision: 1.0 00:09:49.349 00:09:49.349 00:09:49.349 Commands Supported and Effects 00:09:49.349 
============================== 00:09:49.349 Admin Commands 00:09:49.349 -------------- 00:09:49.349 Delete I/O Submission Queue (00h): Supported 00:09:49.349 Create I/O Submission Queue (01h): Supported 00:09:49.349 Get Log Page (02h): Supported 00:09:49.349 Delete I/O Completion Queue (04h): Supported 00:09:49.349 Create I/O Completion Queue (05h): Supported 00:09:49.350 Identify (06h): Supported 00:09:49.350 Abort (08h): Supported 00:09:49.350 Set Features (09h): Supported 00:09:49.350 Get Features (0Ah): Supported 00:09:49.350 Asynchronous Event Request (0Ch): Supported 00:09:49.350 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:49.350 Directive Send (19h): Supported 00:09:49.350 Directive Receive (1Ah): Supported 00:09:49.350 Virtualization Management (1Ch): Supported 00:09:49.350 Doorbell Buffer Config (7Ch): Supported 00:09:49.350 Format NVM (80h): Supported LBA-Change 00:09:49.350 I/O Commands 00:09:49.350 ------------ 00:09:49.350 Flush (00h): Supported LBA-Change 00:09:49.350 Write (01h): Supported LBA-Change 00:09:49.350 Read (02h): Supported 00:09:49.350 Compare (05h): Supported 00:09:49.350 Write Zeroes (08h): Supported LBA-Change 00:09:49.350 Dataset Management (09h): Supported LBA-Change 00:09:49.350 Unknown (0Ch): Supported 00:09:49.350 Unknown (12h): Supported 00:09:49.350 Copy (19h): Supported LBA-Change 00:09:49.350 Unknown (1Dh): Supported LBA-Change 00:09:49.350 00:09:49.350 Error Log 00:09:49.350 ========= 00:09:49.350 00:09:49.350 Arbitration 00:09:49.350 =========== 00:09:49.350 Arbitration Burst: no limit 00:09:49.350 00:09:49.350 Power Management 00:09:49.350 ================ 00:09:49.350 Number of Power States: 1 00:09:49.350 Current Power State: Power State #0 00:09:49.350 Power State #0: 00:09:49.350 Max Power: 25.00 W 00:09:49.350 Non-Operational State: Operational 00:09:49.350 Entry Latency: 16 microseconds 00:09:49.350 Exit Latency: 4 microseconds 00:09:49.350 Relative Read Throughput: 0 00:09:49.350 Relative Read Latency: 0 00:09:49.350 Relative Write Throughput: 0 00:09:49.350 Relative Write Latency: 0 00:09:49.350 Idle Power: Not Reported 00:09:49.350 Active Power: Not Reported 00:09:49.350 Non-Operational Permissive Mode: Not Supported 00:09:49.350 00:09:49.350 Health Information 00:09:49.350 ================== 00:09:49.350 Critical Warnings: 00:09:49.350 Available Spare Space: OK 00:09:49.350 Temperature: OK 00:09:49.350 Device Reliability: OK 00:09:49.350 Read Only: No 00:09:49.350 Volatile Memory Backup: OK 00:09:49.350 Current Temperature: 323 Kelvin (50 Celsius) 00:09:49.350 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:49.350 Available Spare: 0% 00:09:49.350 Available Spare Threshold: 0% 00:09:49.350 Life Percentage Used: 0% 00:09:49.350 Data Units Read: 1196 00:09:49.350 Data Units Written: 551 00:09:49.350 Host Read Commands: 56924 00:09:49.350 Host Write Commands: 27921 00:09:49.350 Controller Busy Time: 0 minutes 00:09:49.350 Power Cycles: 0 00:09:49.350 Power On Hours: 0 hours 00:09:49.350 Unsafe Shutdowns: 0 00:09:49.350 Unrecoverable Media Errors: 0 00:09:49.350 Lifetime Error Log Entries: 0 00:09:49.350 Warning Temperature Time: 0 minutes 00:09:49.350 Critical Temperature Time: 0 minutes 00:09:49.350 00:09:49.350 Number of Queues 00:09:49.350 ================ 00:09:49.350 Number of I/O Submission Queues: 64 00:09:49.350 Number of I/O Completion Queues: 64 00:09:49.350 00:09:49.350 ZNS Specific Controller Data 00:09:49.350 ============================ 00:09:49.350 Zone Append Size Limit: 0 00:09:49.350 00:09:49.350 
00:09:49.350 Active Namespaces 00:09:49.350 ================= 00:09:49.350 Namespace ID:1 00:09:49.350 Error Recovery Timeout: Unlimited 00:09:49.350 Command Set Identifier: NVM (00h) 00:09:49.350 Deallocate: Supported 00:09:49.350 Deallocated/Unwritten Error: Supported 00:09:49.350 Deallocated Read Value: All 0x00 00:09:49.350 Deallocate in Write Zeroes: Not Supported 00:09:49.350 Deallocated Guard Field: 0xFFFF 00:09:49.350 Flush: Supported 00:09:49.350 Reservation: Not Supported 00:09:49.350 Namespace Sharing Capabilities: Private 00:09:49.350 Size (in LBAs): 1310720 (5GiB) 00:09:49.350 Capacity (in LBAs): 1310720 (5GiB) 00:09:49.350 Utilization (in LBAs): 1310720 (5GiB) 00:09:49.350 Thin Provisioning: Not Supported 00:09:49.350 Per-NS Atomic Units: No 00:09:49.350 Maximum Single Source Range Length: 128 00:09:49.350 Maximum Copy Length: 128 00:09:49.350 Maximum Source Range Count: 128 00:09:49.350 NGUID/EUI64 Never Reused: No 00:09:49.350 Namespace Write Protected: No 00:09:49.350 Number of LBA Formats: 8 00:09:49.350 Current LBA Format: LBA Format #04 00:09:49.350 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:49.350 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:49.350 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:49.350 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:49.350 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:49.350 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:49.350 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:49.350 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:49.350 00:09:49.350 16:19:09 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:49.350 16:19:09 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' -i 0 00:09:49.611 ===================================================== 00:09:49.611 NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:49.611 ===================================================== 00:09:49.611 Controller Capabilities/Features 00:09:49.611 ================================ 00:09:49.611 Vendor ID: 1b36 00:09:49.611 Subsystem Vendor ID: 1af4 00:09:49.611 Serial Number: 12342 00:09:49.611 Model Number: QEMU NVMe Ctrl 00:09:49.611 Firmware Version: 8.0.0 00:09:49.611 Recommended Arb Burst: 6 00:09:49.611 IEEE OUI Identifier: 00 54 52 00:09:49.611 Multi-path I/O 00:09:49.611 May have multiple subsystem ports: No 00:09:49.611 May have multiple controllers: No 00:09:49.611 Associated with SR-IOV VF: No 00:09:49.611 Max Data Transfer Size: 524288 00:09:49.611 Max Number of Namespaces: 256 00:09:49.611 Max Number of I/O Queues: 64 00:09:49.611 NVMe Specification Version (VS): 1.4 00:09:49.611 NVMe Specification Version (Identify): 1.4 00:09:49.611 Maximum Queue Entries: 2048 00:09:49.611 Contiguous Queues Required: Yes 00:09:49.611 Arbitration Mechanisms Supported 00:09:49.611 Weighted Round Robin: Not Supported 00:09:49.611 Vendor Specific: Not Supported 00:09:49.611 Reset Timeout: 7500 ms 00:09:49.611 Doorbell Stride: 4 bytes 00:09:49.611 NVM Subsystem Reset: Not Supported 00:09:49.611 Command Sets Supported 00:09:49.611 NVM Command Set: Supported 00:09:49.611 Boot Partition: Not Supported 00:09:49.611 Memory Page Size Minimum: 4096 bytes 00:09:49.611 Memory Page Size Maximum: 65536 bytes 00:09:49.611 Persistent Memory Region: Not Supported 00:09:49.611 Optional Asynchronous Events Supported 00:09:49.611 Namespace Attribute Notices: Supported 00:09:49.611 Firmware Activation Notices: Not Supported 00:09:49.611 ANA Change 
Notices: Not Supported 00:09:49.611 PLE Aggregate Log Change Notices: Not Supported 00:09:49.611 LBA Status Info Alert Notices: Not Supported 00:09:49.611 EGE Aggregate Log Change Notices: Not Supported 00:09:49.611 Normal NVM Subsystem Shutdown event: Not Supported 00:09:49.611 Zone Descriptor Change Notices: Not Supported 00:09:49.611 Discovery Log Change Notices: Not Supported 00:09:49.611 Controller Attributes 00:09:49.611 128-bit Host Identifier: Not Supported 00:09:49.611 Non-Operational Permissive Mode: Not Supported 00:09:49.611 NVM Sets: Not Supported 00:09:49.611 Read Recovery Levels: Not Supported 00:09:49.611 Endurance Groups: Not Supported 00:09:49.611 Predictable Latency Mode: Not Supported 00:09:49.611 Traffic Based Keep Alive: Not Supported 00:09:49.611 Namespace Granularity: Not Supported 00:09:49.611 SQ Associations: Not Supported 00:09:49.611 UUID List: Not Supported 00:09:49.611 Multi-Domain Subsystem: Not Supported 00:09:49.611 Fixed Capacity Management: Not Supported 00:09:49.611 Variable Capacity Management: Not Supported 00:09:49.611 Delete Endurance Group: Not Supported 00:09:49.611 Delete NVM Set: Not Supported 00:09:49.611 Extended LBA Formats Supported: Supported 00:09:49.611 Flexible Data Placement Supported: Not Supported 00:09:49.611 00:09:49.611 Controller Memory Buffer Support 00:09:49.611 ================================ 00:09:49.611 Supported: No 00:09:49.611 00:09:49.611 Persistent Memory Region Support 00:09:49.611 ================================ 00:09:49.611 Supported: No 00:09:49.611 00:09:49.611 Admin Command Set Attributes 00:09:49.611 ============================ 00:09:49.611 Security Send/Receive: Not Supported 00:09:49.611 Format NVM: Supported 00:09:49.611 Firmware Activate/Download: Not Supported 00:09:49.611 Namespace Management: Supported 00:09:49.611 Device Self-Test: Not Supported 00:09:49.611 Directives: Supported 00:09:49.611 NVMe-MI: Not Supported 00:09:49.611 Virtualization Management: Not Supported 00:09:49.611 Doorbell Buffer Config: Supported 00:09:49.611 Get LBA Status Capability: Not Supported 00:09:49.611 Command & Feature Lockdown Capability: Not Supported 00:09:49.611 Abort Command Limit: 4 00:09:49.611 Async Event Request Limit: 4 00:09:49.611 Number of Firmware Slots: N/A 00:09:49.611 Firmware Slot 1 Read-Only: N/A 00:09:49.611 Firmware Activation Without Reset: N/A 00:09:49.611 Multiple Update Detection Support: N/A 00:09:49.611 Firmware Update Granularity: No Information Provided 00:09:49.611 Per-Namespace SMART Log: Yes 00:09:49.611 Asymmetric Namespace Access Log Page: Not Supported 00:09:49.611 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:49.611 Command Effects Log Page: Supported 00:09:49.611 Get Log Page Extended Data: Supported 00:09:49.611 Telemetry Log Pages: Not Supported 00:09:49.612 Persistent Event Log Pages: Not Supported 00:09:49.612 Supported Log Pages Log Page: May Support 00:09:49.612 Commands Supported & Effects Log Page: Not Supported 00:09:49.612 Feature Identifiers & Effects Log Page: May Support 00:09:49.612 NVMe-MI Commands & Effects Log Page: May Support 00:09:49.612 Data Area 4 for Telemetry Log: Not Supported 00:09:49.612 Error Log Page Entries Supported: 1 00:09:49.612 Keep Alive: Not Supported 00:09:49.612 00:09:49.612 NVM Command Set Attributes 00:09:49.612 ========================== 00:09:49.612 Submission Queue Entry Size 00:09:49.612 Max: 64 00:09:49.612 Min: 64 00:09:49.612 Completion Queue Entry Size 00:09:49.612 Max: 16 00:09:49.612 Min: 16 00:09:49.612 Number of Namespaces: 256 
00:09:49.612 Compare Command: Supported 00:09:49.612 Write Uncorrectable Command: Not Supported 00:09:49.612 Dataset Management Command: Supported 00:09:49.612 Write Zeroes Command: Supported 00:09:49.612 Set Features Save Field: Supported 00:09:49.612 Reservations: Not Supported 00:09:49.612 Timestamp: Supported 00:09:49.612 Copy: Supported 00:09:49.612 Volatile Write Cache: Present 00:09:49.612 Atomic Write Unit (Normal): 1 00:09:49.612 Atomic Write Unit (PFail): 1 00:09:49.612 Atomic Compare & Write Unit: 1 00:09:49.612 Fused Compare & Write: Not Supported 00:09:49.612 Scatter-Gather List 00:09:49.612 SGL Command Set: Supported 00:09:49.612 SGL Keyed: Not Supported 00:09:49.612 SGL Bit Bucket Descriptor: Not Supported 00:09:49.612 SGL Metadata Pointer: Not Supported 00:09:49.612 Oversized SGL: Not Supported 00:09:49.612 SGL Metadata Address: Not Supported 00:09:49.612 SGL Offset: Not Supported 00:09:49.612 Transport SGL Data Block: Not Supported 00:09:49.612 Replay Protected Memory Block: Not Supported 00:09:49.612 00:09:49.612 Firmware Slot Information 00:09:49.612 ========================= 00:09:49.612 Active slot: 1 00:09:49.612 Slot 1 Firmware Revision: 1.0 00:09:49.612 00:09:49.612 00:09:49.612 Commands Supported and Effects 00:09:49.612 ============================== 00:09:49.612 Admin Commands 00:09:49.612 -------------- 00:09:49.612 Delete I/O Submission Queue (00h): Supported 00:09:49.612 Create I/O Submission Queue (01h): Supported 00:09:49.612 Get Log Page (02h): Supported 00:09:49.612 Delete I/O Completion Queue (04h): Supported 00:09:49.612 Create I/O Completion Queue (05h): Supported 00:09:49.612 Identify (06h): Supported 00:09:49.612 Abort (08h): Supported 00:09:49.612 Set Features (09h): Supported 00:09:49.612 Get Features (0Ah): Supported 00:09:49.612 Asynchronous Event Request (0Ch): Supported 00:09:49.612 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:49.612 Directive Send (19h): Supported 00:09:49.612 Directive Receive (1Ah): Supported 00:09:49.612 Virtualization Management (1Ch): Supported 00:09:49.612 Doorbell Buffer Config (7Ch): Supported 00:09:49.612 Format NVM (80h): Supported LBA-Change 00:09:49.612 I/O Commands 00:09:49.612 ------------ 00:09:49.612 Flush (00h): Supported LBA-Change 00:09:49.612 Write (01h): Supported LBA-Change 00:09:49.612 Read (02h): Supported 00:09:49.612 Compare (05h): Supported 00:09:49.612 Write Zeroes (08h): Supported LBA-Change 00:09:49.612 Dataset Management (09h): Supported LBA-Change 00:09:49.612 Unknown (0Ch): Supported 00:09:49.612 Unknown (12h): Supported 00:09:49.612 Copy (19h): Supported LBA-Change 00:09:49.612 Unknown (1Dh): Supported LBA-Change 00:09:49.612 00:09:49.612 Error Log 00:09:49.612 ========= 00:09:49.612 00:09:49.612 Arbitration 00:09:49.612 =========== 00:09:49.612 Arbitration Burst: no limit 00:09:49.612 00:09:49.612 Power Management 00:09:49.612 ================ 00:09:49.612 Number of Power States: 1 00:09:49.612 Current Power State: Power State #0 00:09:49.612 Power State #0: 00:09:49.612 Max Power: 25.00 W 00:09:49.612 Non-Operational State: Operational 00:09:49.612 Entry Latency: 16 microseconds 00:09:49.612 Exit Latency: 4 microseconds 00:09:49.612 Relative Read Throughput: 0 00:09:49.612 Relative Read Latency: 0 00:09:49.612 Relative Write Throughput: 0 00:09:49.612 Relative Write Latency: 0 00:09:49.612 Idle Power: Not Reported 00:09:49.612 Active Power: Not Reported 00:09:49.612 Non-Operational Permissive Mode: Not Supported 00:09:49.612 00:09:49.612 Health Information 00:09:49.612 
================== 00:09:49.612 Critical Warnings: 00:09:49.612 Available Spare Space: OK 00:09:49.612 Temperature: OK 00:09:49.612 Device Reliability: OK 00:09:49.612 Read Only: No 00:09:49.612 Volatile Memory Backup: OK 00:09:49.612 Current Temperature: 323 Kelvin (50 Celsius) 00:09:49.612 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:49.612 Available Spare: 0% 00:09:49.612 Available Spare Threshold: 0% 00:09:49.612 Life Percentage Used: 0% 00:09:49.612 Data Units Read: 3807 00:09:49.612 Data Units Written: 1754 00:09:49.612 Host Read Commands: 173100 00:09:49.612 Host Write Commands: 84770 00:09:49.612 Controller Busy Time: 0 minutes 00:09:49.612 Power Cycles: 0 00:09:49.612 Power On Hours: 0 hours 00:09:49.612 Unsafe Shutdowns: 0 00:09:49.612 Unrecoverable Media Errors: 0 00:09:49.612 Lifetime Error Log Entries: 0 00:09:49.612 Warning Temperature Time: 0 minutes 00:09:49.612 Critical Temperature Time: 0 minutes 00:09:49.612 00:09:49.612 Number of Queues 00:09:49.612 ================ 00:09:49.612 Number of I/O Submission Queues: 64 00:09:49.612 Number of I/O Completion Queues: 64 00:09:49.612 00:09:49.612 ZNS Specific Controller Data 00:09:49.612 ============================ 00:09:49.612 Zone Append Size Limit: 0 00:09:49.612 00:09:49.612 00:09:49.612 Active Namespaces 00:09:49.612 ================= 00:09:49.612 Namespace ID:1 00:09:49.612 Error Recovery Timeout: Unlimited 00:09:49.612 Command Set Identifier: NVM (00h) 00:09:49.612 Deallocate: Supported 00:09:49.612 Deallocated/Unwritten Error: Supported 00:09:49.612 Deallocated Read Value: All 0x00 00:09:49.612 Deallocate in Write Zeroes: Not Supported 00:09:49.612 Deallocated Guard Field: 0xFFFF 00:09:49.612 Flush: Supported 00:09:49.612 Reservation: Not Supported 00:09:49.612 Namespace Sharing Capabilities: Private 00:09:49.612 Size (in LBAs): 1048576 (4GiB) 00:09:49.612 Capacity (in LBAs): 1048576 (4GiB) 00:09:49.612 Utilization (in LBAs): 1048576 (4GiB) 00:09:49.612 Thin Provisioning: Not Supported 00:09:49.612 Per-NS Atomic Units: No 00:09:49.612 Maximum Single Source Range Length: 128 00:09:49.612 Maximum Copy Length: 128 00:09:49.612 Maximum Source Range Count: 128 00:09:49.612 NGUID/EUI64 Never Reused: No 00:09:49.612 Namespace Write Protected: No 00:09:49.612 Number of LBA Formats: 8 00:09:49.612 Current LBA Format: LBA Format #04 00:09:49.612 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:49.612 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:49.612 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:49.612 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:49.612 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:49.612 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:49.612 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:49.612 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:49.612 00:09:49.612 Namespace ID:2 00:09:49.612 Error Recovery Timeout: Unlimited 00:09:49.612 Command Set Identifier: NVM (00h) 00:09:49.612 Deallocate: Supported 00:09:49.612 Deallocated/Unwritten Error: Supported 00:09:49.612 Deallocated Read Value: All 0x00 00:09:49.612 Deallocate in Write Zeroes: Not Supported 00:09:49.612 Deallocated Guard Field: 0xFFFF 00:09:49.612 Flush: Supported 00:09:49.612 Reservation: Not Supported 00:09:49.612 Namespace Sharing Capabilities: Private 00:09:49.612 Size (in LBAs): 1048576 (4GiB) 00:09:49.612 Capacity (in LBAs): 1048576 (4GiB) 00:09:49.612 Utilization (in LBAs): 1048576 (4GiB) 00:09:49.612 Thin Provisioning: Not Supported 00:09:49.612 Per-NS Atomic Units: No 
00:09:49.612 Maximum Single Source Range Length: 128 00:09:49.612 Maximum Copy Length: 128 00:09:49.612 Maximum Source Range Count: 128 00:09:49.612 NGUID/EUI64 Never Reused: No 00:09:49.612 Namespace Write Protected: No 00:09:49.612 Number of LBA Formats: 8 00:09:49.612 Current LBA Format: LBA Format #04 00:09:49.612 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:49.612 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:49.612 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:49.612 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:49.612 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:49.612 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:49.612 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:49.613 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:49.613 00:09:49.613 Namespace ID:3 00:09:49.613 Error Recovery Timeout: Unlimited 00:09:49.613 Command Set Identifier: NVM (00h) 00:09:49.613 Deallocate: Supported 00:09:49.613 Deallocated/Unwritten Error: Supported 00:09:49.613 Deallocated Read Value: All 0x00 00:09:49.613 Deallocate in Write Zeroes: Not Supported 00:09:49.613 Deallocated Guard Field: 0xFFFF 00:09:49.613 Flush: Supported 00:09:49.613 Reservation: Not Supported 00:09:49.613 Namespace Sharing Capabilities: Private 00:09:49.613 Size (in LBAs): 1048576 (4GiB) 00:09:49.613 Capacity (in LBAs): 1048576 (4GiB) 00:09:49.613 Utilization (in LBAs): 1048576 (4GiB) 00:09:49.613 Thin Provisioning: Not Supported 00:09:49.613 Per-NS Atomic Units: No 00:09:49.613 Maximum Single Source Range Length: 128 00:09:49.613 Maximum Copy Length: 128 00:09:49.613 Maximum Source Range Count: 128 00:09:49.613 NGUID/EUI64 Never Reused: No 00:09:49.613 Namespace Write Protected: No 00:09:49.613 Number of LBA Formats: 8 00:09:49.613 Current LBA Format: LBA Format #04 00:09:49.613 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:49.613 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:49.613 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:49.613 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:49.613 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:49.613 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:49.613 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:49.613 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:49.613 00:09:49.613 16:19:09 -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:49.613 16:19:09 -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' -i 0 00:09:49.874 ===================================================== 00:09:49.874 NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:49.874 ===================================================== 00:09:49.874 Controller Capabilities/Features 00:09:49.874 ================================ 00:09:49.875 Vendor ID: 1b36 00:09:49.875 Subsystem Vendor ID: 1af4 00:09:49.875 Serial Number: 12343 00:09:49.875 Model Number: QEMU NVMe Ctrl 00:09:49.875 Firmware Version: 8.0.0 00:09:49.875 Recommended Arb Burst: 6 00:09:49.875 IEEE OUI Identifier: 00 54 52 00:09:49.875 Multi-path I/O 00:09:49.875 May have multiple subsystem ports: No 00:09:49.875 May have multiple controllers: Yes 00:09:49.875 Associated with SR-IOV VF: No 00:09:49.875 Max Data Transfer Size: 524288 00:09:49.875 Max Number of Namespaces: 256 00:09:49.875 Max Number of I/O Queues: 64 00:09:49.875 NVMe Specification Version (VS): 1.4 00:09:49.875 NVMe Specification Version (Identify): 1.4 00:09:49.875 Maximum Queue Entries: 2048 
00:09:49.875 Contiguous Queues Required: Yes 00:09:49.875 Arbitration Mechanisms Supported 00:09:49.875 Weighted Round Robin: Not Supported 00:09:49.875 Vendor Specific: Not Supported 00:09:49.875 Reset Timeout: 7500 ms 00:09:49.875 Doorbell Stride: 4 bytes 00:09:49.875 NVM Subsystem Reset: Not Supported 00:09:49.875 Command Sets Supported 00:09:49.875 NVM Command Set: Supported 00:09:49.875 Boot Partition: Not Supported 00:09:49.875 Memory Page Size Minimum: 4096 bytes 00:09:49.875 Memory Page Size Maximum: 65536 bytes 00:09:49.875 Persistent Memory Region: Not Supported 00:09:49.875 Optional Asynchronous Events Supported 00:09:49.875 Namespace Attribute Notices: Supported 00:09:49.875 Firmware Activation Notices: Not Supported 00:09:49.875 ANA Change Notices: Not Supported 00:09:49.875 PLE Aggregate Log Change Notices: Not Supported 00:09:49.875 LBA Status Info Alert Notices: Not Supported 00:09:49.875 EGE Aggregate Log Change Notices: Not Supported 00:09:49.875 Normal NVM Subsystem Shutdown event: Not Supported 00:09:49.875 Zone Descriptor Change Notices: Not Supported 00:09:49.875 Discovery Log Change Notices: Not Supported 00:09:49.875 Controller Attributes 00:09:49.875 128-bit Host Identifier: Not Supported 00:09:49.875 Non-Operational Permissive Mode: Not Supported 00:09:49.875 NVM Sets: Not Supported 00:09:49.875 Read Recovery Levels: Not Supported 00:09:49.875 Endurance Groups: Supported 00:09:49.875 Predictable Latency Mode: Not Supported 00:09:49.875 Traffic Based Keep Alive: Not Supported 00:09:49.875 Namespace Granularity: Not Supported 00:09:49.875 SQ Associations: Not Supported 00:09:49.875 UUID List: Not Supported 00:09:49.875 Multi-Domain Subsystem: Not Supported 00:09:49.875 Fixed Capacity Management: Not Supported 00:09:49.875 Variable Capacity Management: Not Supported 00:09:49.875 Delete Endurance Group: Not Supported 00:09:49.875 Delete NVM Set: Not Supported 00:09:49.875 Extended LBA Formats Supported: Supported 00:09:49.875 Flexible Data Placement Supported: Supported 00:09:49.875 00:09:49.875 Controller Memory Buffer Support 00:09:49.875 ================================ 00:09:49.875 Supported: No 00:09:49.875 00:09:49.875 Persistent Memory Region Support 00:09:49.875 ================================ 00:09:49.875 Supported: No 00:09:49.875 00:09:49.875 Admin Command Set Attributes 00:09:49.875 ============================ 00:09:49.875 Security Send/Receive: Not Supported 00:09:49.875 Format NVM: Supported 00:09:49.875 Firmware Activate/Download: Not Supported 00:09:49.875 Namespace Management: Supported 00:09:49.875 Device Self-Test: Not Supported 00:09:49.875 Directives: Supported 00:09:49.875 NVMe-MI: Not Supported 00:09:49.875 Virtualization Management: Not Supported 00:09:49.875 Doorbell Buffer Config: Supported 00:09:49.875 Get LBA Status Capability: Not Supported 00:09:49.875 Command & Feature Lockdown Capability: Not Supported 00:09:49.875 Abort Command Limit: 4 00:09:49.875 Async Event Request Limit: 4 00:09:49.875 Number of Firmware Slots: N/A 00:09:49.875 Firmware Slot 1 Read-Only: N/A 00:09:49.875 Firmware Activation Without Reset: N/A 00:09:49.875 Multiple Update Detection Support: N/A 00:09:49.875 Firmware Update Granularity: No Information Provided 00:09:49.875 Per-Namespace SMART Log: Yes 00:09:49.875 Asymmetric Namespace Access Log Page: Not Supported 00:09:49.875 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:49.875 Command Effects Log Page: Supported 00:09:49.875 Get Log Page Extended Data: Supported 00:09:49.875 Telemetry Log Pages: Not 
Supported 00:09:49.875 Persistent Event Log Pages: Not Supported 00:09:49.875 Supported Log Pages Log Page: May Support 00:09:49.875 Commands Supported & Effects Log Page: Not Supported 00:09:49.875 Feature Identifiers & Effects Log Page: May Support 00:09:49.875 NVMe-MI Commands & Effects Log Page: May Support 00:09:49.875 Data Area 4 for Telemetry Log: Not Supported 00:09:49.875 Error Log Page Entries Supported: 1 00:09:49.875 Keep Alive: Not Supported 00:09:49.875 00:09:49.875 NVM Command Set Attributes 00:09:49.875 ========================== 00:09:49.875 Submission Queue Entry Size 00:09:49.875 Max: 64 00:09:49.875 Min: 64 00:09:49.875 Completion Queue Entry Size 00:09:49.875 Max: 16 00:09:49.875 Min: 16 00:09:49.875 Number of Namespaces: 256 00:09:49.875 Compare Command: Supported 00:09:49.875 Write Uncorrectable Command: Not Supported 00:09:49.875 Dataset Management Command: Supported 00:09:49.875 Write Zeroes Command: Supported 00:09:49.875 Set Features Save Field: Supported 00:09:49.875 Reservations: Not Supported 00:09:49.875 Timestamp: Supported 00:09:49.875 Copy: Supported 00:09:49.875 Volatile Write Cache: Present 00:09:49.875 Atomic Write Unit (Normal): 1 00:09:49.875 Atomic Write Unit (PFail): 1 00:09:49.875 Atomic Compare & Write Unit: 1 00:09:49.875 Fused Compare & Write: Not Supported 00:09:49.875 Scatter-Gather List 00:09:49.875 SGL Command Set: Supported 00:09:49.875 SGL Keyed: Not Supported 00:09:49.875 SGL Bit Bucket Descriptor: Not Supported 00:09:49.875 SGL Metadata Pointer: Not Supported 00:09:49.875 Oversized SGL: Not Supported 00:09:49.875 SGL Metadata Address: Not Supported 00:09:49.875 SGL Offset: Not Supported 00:09:49.875 Transport SGL Data Block: Not Supported 00:09:49.875 Replay Protected Memory Block: Not Supported 00:09:49.875 00:09:49.875 Firmware Slot Information 00:09:49.875 ========================= 00:09:49.875 Active slot: 1 00:09:49.875 Slot 1 Firmware Revision: 1.0 00:09:49.875 00:09:49.875 00:09:49.875 Commands Supported and Effects 00:09:49.875 ============================== 00:09:49.875 Admin Commands 00:09:49.875 -------------- 00:09:49.875 Delete I/O Submission Queue (00h): Supported 00:09:49.875 Create I/O Submission Queue (01h): Supported 00:09:49.875 Get Log Page (02h): Supported 00:09:49.875 Delete I/O Completion Queue (04h): Supported 00:09:49.875 Create I/O Completion Queue (05h): Supported 00:09:49.875 Identify (06h): Supported 00:09:49.875 Abort (08h): Supported 00:09:49.875 Set Features (09h): Supported 00:09:49.875 Get Features (0Ah): Supported 00:09:49.875 Asynchronous Event Request (0Ch): Supported 00:09:49.875 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:49.875 Directive Send (19h): Supported 00:09:49.875 Directive Receive (1Ah): Supported 00:09:49.875 Virtualization Management (1Ch): Supported 00:09:49.875 Doorbell Buffer Config (7Ch): Supported 00:09:49.875 Format NVM (80h): Supported LBA-Change 00:09:49.875 I/O Commands 00:09:49.875 ------------ 00:09:49.875 Flush (00h): Supported LBA-Change 00:09:49.875 Write (01h): Supported LBA-Change 00:09:49.875 Read (02h): Supported 00:09:49.875 Compare (05h): Supported 00:09:49.875 Write Zeroes (08h): Supported LBA-Change 00:09:49.875 Dataset Management (09h): Supported LBA-Change 00:09:49.875 Unknown (0Ch): Supported 00:09:49.875 Unknown (12h): Supported 00:09:49.875 Copy (19h): Supported LBA-Change 00:09:49.875 Unknown (1Dh): Supported LBA-Change 00:09:49.875 00:09:49.875 Error Log 00:09:49.875 ========= 00:09:49.875 00:09:49.875 Arbitration 00:09:49.875 =========== 
00:09:49.875 Arbitration Burst: no limit 00:09:49.875 00:09:49.875 Power Management 00:09:49.875 ================ 00:09:49.875 Number of Power States: 1 00:09:49.875 Current Power State: Power State #0 00:09:49.875 Power State #0: 00:09:49.875 Max Power: 25.00 W 00:09:49.875 Non-Operational State: Operational 00:09:49.875 Entry Latency: 16 microseconds 00:09:49.875 Exit Latency: 4 microseconds 00:09:49.875 Relative Read Throughput: 0 00:09:49.875 Relative Read Latency: 0 00:09:49.875 Relative Write Throughput: 0 00:09:49.875 Relative Write Latency: 0 00:09:49.875 Idle Power: Not Reported 00:09:49.876 Active Power: Not Reported 00:09:49.876 Non-Operational Permissive Mode: Not Supported 00:09:49.876 00:09:49.876 Health Information 00:09:49.876 ================== 00:09:49.876 Critical Warnings: 00:09:49.876 Available Spare Space: OK 00:09:49.876 Temperature: OK 00:09:49.876 Device Reliability: OK 00:09:49.876 Read Only: No 00:09:49.876 Volatile Memory Backup: OK 00:09:49.876 Current Temperature: 323 Kelvin (50 Celsius) 00:09:49.876 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:49.876 Available Spare: 0% 00:09:49.876 Available Spare Threshold: 0% 00:09:49.876 Life Percentage Used: 0% 00:09:49.876 Data Units Read: 1411 00:09:49.876 Data Units Written: 653 00:09:49.876 Host Read Commands: 58762 00:09:49.876 Host Write Commands: 28802 00:09:49.876 Controller Busy Time: 0 minutes 00:09:49.876 Power Cycles: 0 00:09:49.876 Power On Hours: 0 hours 00:09:49.876 Unsafe Shutdowns: 0 00:09:49.876 Unrecoverable Media Errors: 0 00:09:49.876 Lifetime Error Log Entries: 0 00:09:49.876 Warning Temperature Time: 0 minutes 00:09:49.876 Critical Temperature Time: 0 minutes 00:09:49.876 00:09:49.876 Number of Queues 00:09:49.876 ================ 00:09:49.876 Number of I/O Submission Queues: 64 00:09:49.876 Number of I/O Completion Queues: 64 00:09:49.876 00:09:49.876 ZNS Specific Controller Data 00:09:49.876 ============================ 00:09:49.876 Zone Append Size Limit: 0 00:09:49.876 00:09:49.876 00:09:49.876 Active Namespaces 00:09:49.876 ================= 00:09:49.876 Namespace ID:1 00:09:49.876 Error Recovery Timeout: Unlimited 00:09:49.876 Command Set Identifier: NVM (00h) 00:09:49.876 Deallocate: Supported 00:09:49.876 Deallocated/Unwritten Error: Supported 00:09:49.876 Deallocated Read Value: All 0x00 00:09:49.876 Deallocate in Write Zeroes: Not Supported 00:09:49.876 Deallocated Guard Field: 0xFFFF 00:09:49.876 Flush: Supported 00:09:49.876 Reservation: Not Supported 00:09:49.876 Namespace Sharing Capabilities: Multiple Controllers 00:09:49.876 Size (in LBAs): 262144 (1GiB) 00:09:49.876 Capacity (in LBAs): 262144 (1GiB) 00:09:49.876 Utilization (in LBAs): 262144 (1GiB) 00:09:49.876 Thin Provisioning: Not Supported 00:09:49.876 Per-NS Atomic Units: No 00:09:49.876 Maximum Single Source Range Length: 128 00:09:49.876 Maximum Copy Length: 128 00:09:49.876 Maximum Source Range Count: 128 00:09:49.876 NGUID/EUI64 Never Reused: No 00:09:49.876 Namespace Write Protected: No 00:09:49.876 Endurance group ID: 1 00:09:49.876 Number of LBA Formats: 8 00:09:49.876 Current LBA Format: LBA Format #04 00:09:49.876 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:49.876 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:49.876 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:49.876 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:49.876 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:49.876 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:49.876 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:09:49.876 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:49.876 00:09:49.876 Get Feature FDP: 00:09:49.876 ================ 00:09:49.876 Enabled: Yes 00:09:49.876 FDP configuration index: 0 00:09:49.876 00:09:49.876 FDP configurations log page 00:09:49.876 =========================== 00:09:49.876 Number of FDP configurations: 1 00:09:49.876 Version: 0 00:09:49.876 Size: 112 00:09:49.876 FDP Configuration Descriptor: 0 00:09:49.876 Descriptor Size: 96 00:09:49.876 Reclaim Group Identifier format: 2 00:09:49.876 FDP Volatile Write Cache: Not Present 00:09:49.876 FDP Configuration: Valid 00:09:49.876 Vendor Specific Size: 0 00:09:49.876 Number of Reclaim Groups: 2 00:09:49.876 Number of Reclaim Unit Handles: 8 00:09:49.876 Max Placement Identifiers: 128 00:09:49.876 Number of Namespaces Supported: 256 00:09:49.876 Reclaim Unit Nominal Size: 6000000 bytes 00:09:49.876 Estimated Reclaim Unit Time Limit: Not Reported 00:09:49.876 RUH Desc #000: RUH Type: Initially Isolated 00:09:49.876 RUH Desc #001: RUH Type: Initially Isolated 00:09:49.876 RUH Desc #002: RUH Type: Initially Isolated 00:09:49.876 RUH Desc #003: RUH Type: Initially Isolated 00:09:49.876 RUH Desc #004: RUH Type: Initially Isolated 00:09:49.876 RUH Desc #005: RUH Type: Initially Isolated 00:09:49.876 RUH Desc #006: RUH Type: Initially Isolated 00:09:49.876 RUH Desc #007: RUH Type: Initially Isolated 00:09:49.876 00:09:49.876 FDP reclaim unit handle usage log page 00:09:49.876 ====================================== 00:09:49.876 Number of Reclaim Unit Handles: 8 00:09:49.876 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:49.876 RUH Usage Desc #001: RUH Attributes: Unused 00:09:49.876 RUH Usage Desc #002: RUH Attributes: Unused 00:09:49.876 RUH Usage Desc #003: RUH Attributes: Unused 00:09:49.876 RUH Usage Desc #004: RUH Attributes: Unused 00:09:49.876 RUH Usage Desc #005: RUH Attributes: Unused 00:09:49.876 RUH Usage Desc #006: RUH Attributes: Unused 00:09:49.876 RUH Usage Desc #007: RUH Attributes: Unused 00:09:49.876 00:09:49.876 FDP statistics log page 00:09:49.876 ======================= 00:09:49.876 Host bytes with metadata written: 434241536 00:09:49.876 Media bytes with metadata written: 434348032 00:09:49.876 Media bytes erased: 0 00:09:49.876 00:09:49.876 FDP events log page 00:09:49.876 =================== 00:09:49.876 Number of FDP events: 0 00:09:49.876 00:09:49.876 00:09:49.876 real 0m1.123s 00:09:49.876 user 0m0.367s 00:09:49.876 sys 0m0.520s 00:09:49.876 16:19:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:49.876 16:19:09 -- common/autotest_common.sh@10 -- # set +x 00:09:49.876 ************************************ 00:09:49.876 END TEST nvme_identify 00:09:49.876 ************************************ 00:09:49.876 16:19:09 -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:49.876 16:19:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:49.876 16:19:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:49.876 16:19:09 -- common/autotest_common.sh@10 -- # set +x 00:09:49.876 ************************************ 00:09:49.876 START TEST nvme_perf 00:09:49.876 ************************************ 00:09:49.876 16:19:09 -- common/autotest_common.sh@1114 -- # nvme_perf 00:09:49.876 16:19:09 -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:51.257 Initializing NVMe Controllers 00:09:51.257 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:51.257 
Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:09:51.257 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:51.257 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:51.257 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:51.257 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:51.257 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:51.257 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:51.257 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:51.257 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:51.257 Initialization complete. Launching workers. 00:09:51.257 ======================================================== 00:09:51.257 Latency(us) 00:09:51.257 Device Information : IOPS MiB/s Average min max 00:09:51.257 PCIE (0000:00:06.0) NSID 1 from core 0: 16730.54 196.06 7647.07 4906.15 33015.91 00:09:51.257 PCIE (0000:00:07.0) NSID 1 from core 0: 16730.54 196.06 7641.97 4931.61 32019.93 00:09:51.257 PCIE (0000:00:09.0) NSID 1 from core 0: 16730.54 196.06 7635.50 4949.05 33929.77 00:09:51.257 PCIE (0000:00:08.0) NSID 1 from core 0: 16730.54 196.06 7629.19 5042.45 33982.76 00:09:51.257 PCIE (0000:00:08.0) NSID 2 from core 0: 16730.54 196.06 7622.75 4984.59 34427.85 00:09:51.257 PCIE (0000:00:08.0) NSID 3 from core 0: 16858.25 197.56 7559.09 4991.00 20678.40 00:09:51.257 ======================================================== 00:09:51.257 Total : 100510.96 1177.86 7622.51 4906.15 34427.85 00:09:51.257 00:09:51.257 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:51.257 ================================================================================= 00:09:51.257 1.00000% : 5066.437us 00:09:51.257 10.00000% : 5394.117us 00:09:51.257 25.00000% : 5797.415us 00:09:51.257 50.00000% : 6427.569us 00:09:51.257 75.00000% : 7511.434us 00:09:51.257 90.00000% : 12401.428us 00:09:51.257 95.00000% : 14216.271us 00:09:51.257 98.00000% : 16031.114us 00:09:51.257 99.00000% : 17543.483us 00:09:51.257 99.50000% : 31053.982us 00:09:51.257 99.90000% : 32667.175us 00:09:51.257 99.99000% : 33070.474us 00:09:51.257 99.99900% : 33070.474us 00:09:51.257 99.99990% : 33070.474us 00:09:51.257 99.99999% : 33070.474us 00:09:51.257 00:09:51.257 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:51.257 ================================================================================= 00:09:51.257 1.00000% : 5167.262us 00:09:51.257 10.00000% : 5494.942us 00:09:51.257 25.00000% : 5847.828us 00:09:51.257 50.00000% : 6402.363us 00:09:51.257 75.00000% : 7461.022us 00:09:51.257 90.00000% : 12351.015us 00:09:51.257 95.00000% : 14115.446us 00:09:51.257 98.00000% : 15930.289us 00:09:51.257 99.00000% : 17543.483us 00:09:51.257 99.50000% : 30045.735us 00:09:51.257 99.90000% : 31658.929us 00:09:51.257 99.99000% : 32062.228us 00:09:51.257 99.99900% : 32062.228us 00:09:51.257 99.99990% : 32062.228us 00:09:51.257 99.99999% : 32062.228us 00:09:51.257 00:09:51.257 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:51.257 ================================================================================= 00:09:51.257 1.00000% : 5167.262us 00:09:51.257 10.00000% : 5494.942us 00:09:51.257 25.00000% : 5847.828us 00:09:51.257 50.00000% : 6402.363us 00:09:51.257 75.00000% : 7561.846us 00:09:51.257 90.00000% : 12300.603us 00:09:51.257 95.00000% : 13812.972us 00:09:51.257 98.00000% : 15426.166us 00:09:51.257 99.00000% : 18652.554us 00:09:51.257 99.50000% : 32062.228us 00:09:51.257 99.90000% : 33675.422us 
00:09:51.257 99.99000% : 34078.720us 00:09:51.257 99.99900% : 34078.720us 00:09:51.257 99.99990% : 34078.720us 00:09:51.257 99.99999% : 34078.720us 00:09:51.257 00:09:51.257 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:51.257 ================================================================================= 00:09:51.257 1.00000% : 5192.468us 00:09:51.257 10.00000% : 5494.942us 00:09:51.257 25.00000% : 5873.034us 00:09:51.257 50.00000% : 6427.569us 00:09:51.257 75.00000% : 7662.671us 00:09:51.257 90.00000% : 12250.191us 00:09:51.257 95.00000% : 13712.148us 00:09:51.257 98.00000% : 15123.692us 00:09:51.257 99.00000% : 19055.852us 00:09:51.257 99.50000% : 32062.228us 00:09:51.257 99.90000% : 33675.422us 00:09:51.257 99.99000% : 34078.720us 00:09:51.258 99.99900% : 34078.720us 00:09:51.258 99.99990% : 34078.720us 00:09:51.258 99.99999% : 34078.720us 00:09:51.258 00:09:51.258 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:51.258 ================================================================================= 00:09:51.258 1.00000% : 5192.468us 00:09:51.258 10.00000% : 5494.942us 00:09:51.258 25.00000% : 5847.828us 00:09:51.258 50.00000% : 6427.569us 00:09:51.258 75.00000% : 7662.671us 00:09:51.258 90.00000% : 11998.129us 00:09:51.258 95.00000% : 13812.972us 00:09:51.258 98.00000% : 15526.991us 00:09:51.258 99.00000% : 18450.905us 00:09:51.258 99.50000% : 32465.526us 00:09:51.258 99.90000% : 34078.720us 00:09:51.258 99.99000% : 34482.018us 00:09:51.258 99.99900% : 34482.018us 00:09:51.258 99.99990% : 34482.018us 00:09:51.258 99.99999% : 34482.018us 00:09:51.258 00:09:51.258 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:51.258 ================================================================================= 00:09:51.258 1.00000% : 5217.674us 00:09:51.258 10.00000% : 5494.942us 00:09:51.258 25.00000% : 5847.828us 00:09:51.258 50.00000% : 6402.363us 00:09:51.258 75.00000% : 7763.495us 00:09:51.258 90.00000% : 12149.366us 00:09:51.258 95.00000% : 13712.148us 00:09:51.258 98.00000% : 16131.938us 00:09:51.258 99.00000% : 17644.308us 00:09:51.258 99.50000% : 18753.378us 00:09:51.258 99.90000% : 20366.572us 00:09:51.258 99.99000% : 20669.046us 00:09:51.258 99.99900% : 20769.871us 00:09:51.258 99.99990% : 20769.871us 00:09:51.258 99.99999% : 20769.871us 00:09:51.258 00:09:51.258 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:51.258 ============================================================================== 00:09:51.258 Range in us Cumulative IO count 00:09:51.258 4889.994 - 4915.200: 0.0179% ( 3) 00:09:51.258 4915.200 - 4940.406: 0.0716% ( 9) 00:09:51.258 4940.406 - 4965.612: 0.1610% ( 15) 00:09:51.258 4965.612 - 4990.818: 0.2445% ( 14) 00:09:51.258 4990.818 - 5016.025: 0.4652% ( 37) 00:09:51.258 5016.025 - 5041.231: 0.7276% ( 44) 00:09:51.258 5041.231 - 5066.437: 1.0735% ( 58) 00:09:51.258 5066.437 - 5091.643: 1.4015% ( 55) 00:09:51.258 5091.643 - 5116.849: 1.8965% ( 83) 00:09:51.258 5116.849 - 5142.055: 2.4809% ( 98) 00:09:51.258 5142.055 - 5167.262: 3.1966% ( 120) 00:09:51.258 5167.262 - 5192.468: 3.8168% ( 104) 00:09:51.258 5192.468 - 5217.674: 4.4370% ( 104) 00:09:51.258 5217.674 - 5242.880: 5.1885% ( 126) 00:09:51.258 5242.880 - 5268.086: 5.9816% ( 133) 00:09:51.258 5268.086 - 5293.292: 6.8583% ( 147) 00:09:51.258 5293.292 - 5318.498: 7.6813% ( 138) 00:09:51.258 5318.498 - 5343.705: 8.5878% ( 152) 00:09:51.258 5343.705 - 5368.911: 9.4346% ( 142) 00:09:51.258 5368.911 - 5394.117: 10.3113% ( 147) 
00:09:51.258 5394.117 - 5419.323: 11.2118% ( 151) 00:09:51.258 5419.323 - 5444.529: 12.1183% ( 152) 00:09:51.258 5444.529 - 5469.735: 13.0069% ( 149) 00:09:51.258 5469.735 - 5494.942: 13.8478% ( 141) 00:09:51.258 5494.942 - 5520.148: 14.7960% ( 159) 00:09:51.258 5520.148 - 5545.354: 15.6310% ( 140) 00:09:51.258 5545.354 - 5570.560: 16.5434% ( 153) 00:09:51.258 5570.560 - 5595.766: 17.5155% ( 163) 00:09:51.258 5595.766 - 5620.972: 18.4936% ( 164) 00:09:51.258 5620.972 - 5646.178: 19.3702% ( 147) 00:09:51.258 5646.178 - 5671.385: 20.3364% ( 162) 00:09:51.258 5671.385 - 5696.591: 21.2548% ( 154) 00:09:51.258 5696.591 - 5721.797: 22.2030% ( 159) 00:09:51.258 5721.797 - 5747.003: 23.1811% ( 164) 00:09:51.258 5747.003 - 5772.209: 24.0100% ( 139) 00:09:51.258 5772.209 - 5797.415: 25.0716% ( 178) 00:09:51.258 5797.415 - 5822.622: 26.0317% ( 161) 00:09:51.258 5822.622 - 5847.828: 26.9800% ( 159) 00:09:51.258 5847.828 - 5873.034: 27.9819% ( 168) 00:09:51.258 5873.034 - 5898.240: 28.9241% ( 158) 00:09:51.258 5898.240 - 5923.446: 30.0155% ( 183) 00:09:51.258 5923.446 - 5948.652: 30.9518% ( 157) 00:09:51.258 5948.652 - 5973.858: 31.9358% ( 165) 00:09:51.258 5973.858 - 5999.065: 32.9079% ( 163) 00:09:51.258 5999.065 - 6024.271: 33.9396% ( 173) 00:09:51.258 6024.271 - 6049.477: 34.9714% ( 173) 00:09:51.258 6049.477 - 6074.683: 35.9554% ( 165) 00:09:51.258 6074.683 - 6099.889: 36.9334% ( 164) 00:09:51.258 6099.889 - 6125.095: 37.9652% ( 173) 00:09:51.258 6125.095 - 6150.302: 38.9730% ( 169) 00:09:51.258 6150.302 - 6175.508: 39.9869% ( 170) 00:09:51.258 6175.508 - 6200.714: 40.9769% ( 166) 00:09:51.258 6200.714 - 6225.920: 42.0444% ( 179) 00:09:51.258 6225.920 - 6251.126: 43.0284% ( 165) 00:09:51.258 6251.126 - 6276.332: 43.9707% ( 158) 00:09:51.258 6276.332 - 6301.538: 45.0620% ( 183) 00:09:51.258 6301.538 - 6326.745: 46.0580% ( 167) 00:09:51.258 6326.745 - 6351.951: 47.0778% ( 171) 00:09:51.258 6351.951 - 6377.157: 48.1035% ( 172) 00:09:51.258 6377.157 - 6402.363: 49.0935% ( 166) 00:09:51.258 6402.363 - 6427.569: 50.1551% ( 178) 00:09:51.258 6427.569 - 6452.775: 51.1271% ( 163) 00:09:51.258 6452.775 - 6503.188: 53.1429% ( 338) 00:09:51.258 6503.188 - 6553.600: 55.2004% ( 345) 00:09:51.258 6553.600 - 6604.012: 57.2519% ( 344) 00:09:51.258 6604.012 - 6654.425: 59.2021% ( 327) 00:09:51.258 6654.425 - 6704.837: 61.2774% ( 348) 00:09:51.258 6704.837 - 6755.249: 63.2276% ( 327) 00:09:51.258 6755.249 - 6805.662: 65.0107% ( 299) 00:09:51.258 6805.662 - 6856.074: 66.7402% ( 290) 00:09:51.258 6856.074 - 6906.486: 68.2013% ( 245) 00:09:51.258 6906.486 - 6956.898: 69.4597% ( 211) 00:09:51.258 6956.898 - 7007.311: 70.5153% ( 177) 00:09:51.258 7007.311 - 7057.723: 71.4933% ( 164) 00:09:51.258 7057.723 - 7108.135: 72.2865% ( 133) 00:09:51.258 7108.135 - 7158.548: 72.7576% ( 79) 00:09:51.258 7158.548 - 7208.960: 73.2228% ( 78) 00:09:51.258 7208.960 - 7259.372: 73.6403% ( 70) 00:09:51.258 7259.372 - 7309.785: 73.9981% ( 60) 00:09:51.258 7309.785 - 7360.197: 74.3142% ( 53) 00:09:51.258 7360.197 - 7410.609: 74.6004% ( 48) 00:09:51.258 7410.609 - 7461.022: 74.9165% ( 53) 00:09:51.258 7461.022 - 7511.434: 75.1789% ( 44) 00:09:51.258 7511.434 - 7561.846: 75.4473% ( 45) 00:09:51.258 7561.846 - 7612.258: 75.6500% ( 34) 00:09:51.258 7612.258 - 7662.671: 75.8528% ( 34) 00:09:51.258 7662.671 - 7713.083: 76.0437% ( 32) 00:09:51.258 7713.083 - 7763.495: 76.2226% ( 30) 00:09:51.258 7763.495 - 7813.908: 76.3955% ( 29) 00:09:51.258 7813.908 - 7864.320: 76.5923% ( 33) 00:09:51.258 7864.320 - 7914.732: 76.7533% ( 27) 00:09:51.258 
7914.732 - 7965.145: 76.9203% ( 28) 00:09:51.258 7965.145 - 8015.557: 77.0396% ( 20) 00:09:51.258 8015.557 - 8065.969: 77.2185% ( 30) 00:09:51.258 8065.969 - 8116.382: 77.3676% ( 25) 00:09:51.258 8116.382 - 8166.794: 77.5048% ( 23) 00:09:51.258 8166.794 - 8217.206: 77.6718% ( 28) 00:09:51.258 8217.206 - 8267.618: 77.8208% ( 25) 00:09:51.258 8267.618 - 8318.031: 77.9521% ( 22) 00:09:51.258 8318.031 - 8368.443: 78.0594% ( 18) 00:09:51.258 8368.443 - 8418.855: 78.1727% ( 19) 00:09:51.258 8418.855 - 8469.268: 78.3039% ( 22) 00:09:51.258 8469.268 - 8519.680: 78.4172% ( 19) 00:09:51.258 8519.680 - 8570.092: 78.5067% ( 15) 00:09:51.258 8570.092 - 8620.505: 78.6081% ( 17) 00:09:51.258 8620.505 - 8670.917: 78.7452% ( 23) 00:09:51.258 8670.917 - 8721.329: 78.8347% ( 15) 00:09:51.258 8721.329 - 8771.742: 78.9599% ( 21) 00:09:51.258 8771.742 - 8822.154: 79.0911% ( 22) 00:09:51.258 8822.154 - 8872.566: 79.1985% ( 18) 00:09:51.258 8872.566 - 8922.978: 79.3237% ( 21) 00:09:51.258 8922.978 - 8973.391: 79.4430% ( 20) 00:09:51.258 8973.391 - 9023.803: 79.5861% ( 24) 00:09:51.258 9023.803 - 9074.215: 79.7054% ( 20) 00:09:51.258 9074.215 - 9124.628: 79.8247% ( 20) 00:09:51.258 9124.628 - 9175.040: 79.9678% ( 24) 00:09:51.258 9175.040 - 9225.452: 80.0871% ( 20) 00:09:51.258 9225.452 - 9275.865: 80.2123% ( 21) 00:09:51.258 9275.865 - 9326.277: 80.3316% ( 20) 00:09:51.258 9326.277 - 9376.689: 80.4449% ( 19) 00:09:51.258 9376.689 - 9427.102: 80.5522% ( 18) 00:09:51.258 9427.102 - 9477.514: 80.6596% ( 18) 00:09:51.258 9477.514 - 9527.926: 80.7729% ( 19) 00:09:51.258 9527.926 - 9578.338: 80.8743% ( 17) 00:09:51.258 9578.338 - 9628.751: 80.9578% ( 14) 00:09:51.258 9628.751 - 9679.163: 81.0771% ( 20) 00:09:51.258 9679.163 - 9729.575: 81.1546% ( 13) 00:09:51.258 9729.575 - 9779.988: 81.2917% ( 23) 00:09:51.258 9779.988 - 9830.400: 81.4051% ( 19) 00:09:51.258 9830.400 - 9880.812: 81.5542% ( 25) 00:09:51.258 9880.812 - 9931.225: 81.7032% ( 25) 00:09:51.258 9931.225 - 9981.637: 81.8285% ( 21) 00:09:51.258 9981.637 - 10032.049: 81.9418% ( 19) 00:09:51.258 10032.049 - 10082.462: 82.0551% ( 19) 00:09:51.258 10082.462 - 10132.874: 82.1505% ( 16) 00:09:51.258 10132.874 - 10183.286: 82.2996% ( 25) 00:09:51.258 10183.286 - 10233.698: 82.4249% ( 21) 00:09:51.258 10233.698 - 10284.111: 82.5680% ( 24) 00:09:51.258 10284.111 - 10334.523: 82.7052% ( 23) 00:09:51.258 10334.523 - 10384.935: 82.8483% ( 24) 00:09:51.258 10384.935 - 10435.348: 82.9616% ( 19) 00:09:51.258 10435.348 - 10485.760: 83.1226% ( 27) 00:09:51.258 10485.760 - 10536.172: 83.2359% ( 19) 00:09:51.258 10536.172 - 10586.585: 83.3850% ( 25) 00:09:51.258 10586.585 - 10636.997: 83.4924% ( 18) 00:09:51.258 10636.997 - 10687.409: 83.6951% ( 34) 00:09:51.258 10687.409 - 10737.822: 83.9933% ( 50) 00:09:51.258 10737.822 - 10788.234: 84.0828% ( 15) 00:09:51.258 10788.234 - 10838.646: 84.2498% ( 28) 00:09:51.258 10838.646 - 10889.058: 84.3690% ( 20) 00:09:51.258 10889.058 - 10939.471: 84.6076% ( 40) 00:09:51.258 10939.471 - 10989.883: 84.8402% ( 39) 00:09:51.258 10989.883 - 11040.295: 85.0131% ( 29) 00:09:51.258 11040.295 - 11090.708: 85.1324% ( 20) 00:09:51.259 11090.708 - 11141.120: 85.3292% ( 33) 00:09:51.259 11141.120 - 11191.532: 85.4723% ( 24) 00:09:51.259 11191.532 - 11241.945: 85.7049% ( 39) 00:09:51.259 11241.945 - 11292.357: 85.8779% ( 29) 00:09:51.259 11292.357 - 11342.769: 86.0985% ( 37) 00:09:51.259 11342.769 - 11393.182: 86.2655% ( 28) 00:09:51.259 11393.182 - 11443.594: 86.4086% ( 24) 00:09:51.259 11443.594 - 11494.006: 86.5577% ( 25) 00:09:51.259 11494.006 - 
11544.418: 86.7128% ( 26) 00:09:51.259 11544.418 - 11594.831: 86.9931% ( 47) 00:09:51.259 11594.831 - 11645.243: 87.2376% ( 41) 00:09:51.259 11645.243 - 11695.655: 87.3927% ( 26) 00:09:51.259 11695.655 - 11746.068: 87.5716% ( 30) 00:09:51.259 11746.068 - 11796.480: 87.7624% ( 32) 00:09:51.259 11796.480 - 11846.892: 87.9234% ( 27) 00:09:51.259 11846.892 - 11897.305: 88.2276% ( 51) 00:09:51.259 11897.305 - 11947.717: 88.4423% ( 36) 00:09:51.259 11947.717 - 11998.129: 88.6450% ( 34) 00:09:51.259 11998.129 - 12048.542: 88.8478% ( 34) 00:09:51.259 12048.542 - 12098.954: 88.9909% ( 24) 00:09:51.259 12098.954 - 12149.366: 89.1698% ( 30) 00:09:51.259 12149.366 - 12199.778: 89.3488% ( 30) 00:09:51.259 12199.778 - 12250.191: 89.5217% ( 29) 00:09:51.259 12250.191 - 12300.603: 89.7603% ( 40) 00:09:51.259 12300.603 - 12351.015: 89.9034% ( 24) 00:09:51.259 12351.015 - 12401.428: 90.1240% ( 37) 00:09:51.259 12401.428 - 12451.840: 90.3208% ( 33) 00:09:51.259 12451.840 - 12502.252: 90.5117% ( 32) 00:09:51.259 12502.252 - 12552.665: 90.7323% ( 37) 00:09:51.259 12552.665 - 12603.077: 90.9113% ( 30) 00:09:51.259 12603.077 - 12653.489: 91.1140% ( 34) 00:09:51.259 12653.489 - 12703.902: 91.2810% ( 28) 00:09:51.259 12703.902 - 12754.314: 91.5375% ( 43) 00:09:51.259 12754.314 - 12804.726: 91.7521% ( 36) 00:09:51.259 12804.726 - 12855.138: 91.8833% ( 22) 00:09:51.259 12855.138 - 12905.551: 92.0503% ( 28) 00:09:51.259 12905.551 - 13006.375: 92.2889% ( 40) 00:09:51.259 13006.375 - 13107.200: 92.6229% ( 56) 00:09:51.259 13107.200 - 13208.025: 92.8614% ( 40) 00:09:51.259 13208.025 - 13308.849: 93.0761% ( 36) 00:09:51.259 13308.849 - 13409.674: 93.3385% ( 44) 00:09:51.259 13409.674 - 13510.498: 93.5651% ( 38) 00:09:51.259 13510.498 - 13611.323: 93.7679% ( 34) 00:09:51.259 13611.323 - 13712.148: 94.0720% ( 51) 00:09:51.259 13712.148 - 13812.972: 94.2808% ( 35) 00:09:51.259 13812.972 - 13913.797: 94.4716% ( 32) 00:09:51.259 13913.797 - 14014.622: 94.8235% ( 59) 00:09:51.259 14014.622 - 14115.446: 94.9845% ( 27) 00:09:51.259 14115.446 - 14216.271: 95.1992% ( 36) 00:09:51.259 14216.271 - 14317.095: 95.3662% ( 28) 00:09:51.259 14317.095 - 14417.920: 95.5689% ( 34) 00:09:51.259 14417.920 - 14518.745: 95.7121% ( 24) 00:09:51.259 14518.745 - 14619.569: 95.8612% ( 25) 00:09:51.259 14619.569 - 14720.394: 96.0341% ( 29) 00:09:51.259 14720.394 - 14821.218: 96.2011% ( 28) 00:09:51.259 14821.218 - 14922.043: 96.3502% ( 25) 00:09:51.259 14922.043 - 15022.868: 96.4754% ( 21) 00:09:51.259 15022.868 - 15123.692: 96.6961% ( 37) 00:09:51.259 15123.692 - 15224.517: 96.8452% ( 25) 00:09:51.259 15224.517 - 15325.342: 97.0181% ( 29) 00:09:51.259 15325.342 - 15426.166: 97.1792% ( 27) 00:09:51.259 15426.166 - 15526.991: 97.3342% ( 26) 00:09:51.259 15526.991 - 15627.815: 97.5072% ( 29) 00:09:51.259 15627.815 - 15728.640: 97.6503% ( 24) 00:09:51.259 15728.640 - 15829.465: 97.7755% ( 21) 00:09:51.259 15829.465 - 15930.289: 97.8888% ( 19) 00:09:51.259 15930.289 - 16031.114: 98.0021% ( 19) 00:09:51.259 16031.114 - 16131.938: 98.1214% ( 20) 00:09:51.259 16131.938 - 16232.763: 98.2526% ( 22) 00:09:51.259 16232.763 - 16333.588: 98.3480% ( 16) 00:09:51.259 16333.588 - 16434.412: 98.4077% ( 10) 00:09:51.259 16434.412 - 16535.237: 98.4673% ( 10) 00:09:51.259 16535.237 - 16636.062: 98.5329% ( 11) 00:09:51.259 16636.062 - 16736.886: 98.5926% ( 10) 00:09:51.259 16736.886 - 16837.711: 98.6582% ( 11) 00:09:51.259 16837.711 - 16938.535: 98.7059% ( 8) 00:09:51.259 16938.535 - 17039.360: 98.8013% ( 16) 00:09:51.259 17039.360 - 17140.185: 98.8311% ( 5) 
00:09:51.259 17140.185 - 17241.009: 98.8967% ( 11) 00:09:51.259 17241.009 - 17341.834: 98.9265% ( 5) 00:09:51.259 17341.834 - 17442.658: 98.9802% ( 9) 00:09:51.259 17442.658 - 17543.483: 99.0160% ( 6) 00:09:51.259 17543.483 - 17644.308: 99.0458% ( 5) 00:09:51.259 17644.308 - 17745.132: 99.0697% ( 4) 00:09:51.259 17745.132 - 17845.957: 99.1054% ( 6) 00:09:51.259 17845.957 - 17946.782: 99.1472% ( 7) 00:09:51.259 17946.782 - 18047.606: 99.1830% ( 6) 00:09:51.259 18047.606 - 18148.431: 99.2128% ( 5) 00:09:51.259 18148.431 - 18249.255: 99.2366% ( 4) 00:09:51.259 29642.437 - 29844.086: 99.2665% ( 5) 00:09:51.259 29844.086 - 30045.735: 99.2963% ( 5) 00:09:51.259 30045.735 - 30247.385: 99.3380% ( 7) 00:09:51.259 30247.385 - 30449.034: 99.4036% ( 11) 00:09:51.259 30449.034 - 30650.683: 99.4454% ( 7) 00:09:51.259 30650.683 - 30852.332: 99.4990% ( 9) 00:09:51.259 30852.332 - 31053.982: 99.5468% ( 8) 00:09:51.259 31053.982 - 31255.631: 99.5945% ( 8) 00:09:51.259 31255.631 - 31457.280: 99.6302% ( 6) 00:09:51.259 31457.280 - 31658.929: 99.6780% ( 8) 00:09:51.259 31658.929 - 31860.578: 99.7257% ( 8) 00:09:51.259 31860.578 - 32062.228: 99.7793% ( 9) 00:09:51.259 32062.228 - 32263.877: 99.8271% ( 8) 00:09:51.259 32263.877 - 32465.526: 99.8688% ( 7) 00:09:51.259 32465.526 - 32667.175: 99.9105% ( 7) 00:09:51.259 32667.175 - 32868.825: 99.9642% ( 9) 00:09:51.259 32868.825 - 33070.474: 100.0000% ( 6) 00:09:51.259 00:09:51.259 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:51.259 ============================================================================== 00:09:51.259 Range in us Cumulative IO count 00:09:51.259 4915.200 - 4940.406: 0.0119% ( 2) 00:09:51.259 4940.406 - 4965.612: 0.0358% ( 4) 00:09:51.259 4965.612 - 4990.818: 0.0537% ( 3) 00:09:51.259 4990.818 - 5016.025: 0.0716% ( 3) 00:09:51.259 5016.025 - 5041.231: 0.1431% ( 12) 00:09:51.259 5041.231 - 5066.437: 0.2087% ( 11) 00:09:51.259 5066.437 - 5091.643: 0.3698% ( 27) 00:09:51.259 5091.643 - 5116.849: 0.5904% ( 37) 00:09:51.259 5116.849 - 5142.055: 0.8469% ( 43) 00:09:51.259 5142.055 - 5167.262: 1.2106% ( 61) 00:09:51.259 5167.262 - 5192.468: 1.6341% ( 71) 00:09:51.259 5192.468 - 5217.674: 1.9800% ( 58) 00:09:51.259 5217.674 - 5242.880: 2.4690% ( 82) 00:09:51.259 5242.880 - 5268.086: 3.0296% ( 94) 00:09:51.259 5268.086 - 5293.292: 3.7333% ( 118) 00:09:51.259 5293.292 - 5318.498: 4.4370% ( 118) 00:09:51.259 5318.498 - 5343.705: 5.1765% ( 124) 00:09:51.259 5343.705 - 5368.911: 5.9995% ( 138) 00:09:51.259 5368.911 - 5394.117: 6.8106% ( 136) 00:09:51.259 5394.117 - 5419.323: 7.6753% ( 145) 00:09:51.259 5419.323 - 5444.529: 8.5699% ( 150) 00:09:51.259 5444.529 - 5469.735: 9.5778% ( 169) 00:09:51.259 5469.735 - 5494.942: 10.6333% ( 177) 00:09:51.259 5494.942 - 5520.148: 11.5756% ( 158) 00:09:51.259 5520.148 - 5545.354: 12.5477% ( 163) 00:09:51.259 5545.354 - 5570.560: 13.5198% ( 163) 00:09:51.259 5570.560 - 5595.766: 14.5575% ( 174) 00:09:51.259 5595.766 - 5620.972: 15.5952% ( 174) 00:09:51.259 5620.972 - 5646.178: 16.7164% ( 188) 00:09:51.259 5646.178 - 5671.385: 17.7481% ( 173) 00:09:51.259 5671.385 - 5696.591: 18.8275% ( 181) 00:09:51.259 5696.591 - 5721.797: 19.8533% ( 172) 00:09:51.259 5721.797 - 5747.003: 20.9029% ( 176) 00:09:51.259 5747.003 - 5772.209: 22.0181% ( 187) 00:09:51.259 5772.209 - 5797.415: 23.1095% ( 183) 00:09:51.259 5797.415 - 5822.622: 24.1949% ( 182) 00:09:51.259 5822.622 - 5847.828: 25.3101% ( 187) 00:09:51.259 5847.828 - 5873.034: 26.4552% ( 192) 00:09:51.259 5873.034 - 5898.240: 27.5942% ( 191) 00:09:51.259 
5898.240 - 5923.446: 28.7154% ( 188) 00:09:51.259 5923.446 - 5948.652: 29.8306% ( 187) 00:09:51.259 5948.652 - 5973.858: 30.9399% ( 186) 00:09:51.259 5973.858 - 5999.065: 32.0730% ( 190) 00:09:51.259 5999.065 - 6024.271: 33.2180% ( 192) 00:09:51.259 6024.271 - 6049.477: 34.3929% ( 197) 00:09:51.259 6049.477 - 6074.683: 35.5379% ( 192) 00:09:51.259 6074.683 - 6099.889: 36.6293% ( 183) 00:09:51.259 6099.889 - 6125.095: 37.7684% ( 191) 00:09:51.259 6125.095 - 6150.302: 38.8896% ( 188) 00:09:51.259 6150.302 - 6175.508: 40.0465% ( 194) 00:09:51.259 6175.508 - 6200.714: 41.1796% ( 190) 00:09:51.259 6200.714 - 6225.920: 42.3426% ( 195) 00:09:51.259 6225.920 - 6251.126: 43.4995% ( 194) 00:09:51.259 6251.126 - 6276.332: 44.6803% ( 198) 00:09:51.259 6276.332 - 6301.538: 45.8433% ( 195) 00:09:51.259 6301.538 - 6326.745: 47.0420% ( 201) 00:09:51.259 6326.745 - 6351.951: 48.2586% ( 204) 00:09:51.259 6351.951 - 6377.157: 49.4036% ( 192) 00:09:51.259 6377.157 - 6402.363: 50.5725% ( 196) 00:09:51.259 6402.363 - 6427.569: 51.7712% ( 201) 00:09:51.259 6427.569 - 6452.775: 52.9401% ( 196) 00:09:51.259 6452.775 - 6503.188: 55.3495% ( 404) 00:09:51.259 6503.188 - 6553.600: 57.5978% ( 377) 00:09:51.259 6553.600 - 6604.012: 59.7626% ( 363) 00:09:51.259 6604.012 - 6654.425: 61.7486% ( 333) 00:09:51.259 6654.425 - 6704.837: 63.5437% ( 301) 00:09:51.259 6704.837 - 6755.249: 65.1419% ( 268) 00:09:51.259 6755.249 - 6805.662: 66.6567% ( 254) 00:09:51.259 6805.662 - 6856.074: 68.0880% ( 240) 00:09:51.259 6856.074 - 6906.486: 69.3643% ( 214) 00:09:51.259 6906.486 - 6956.898: 70.4377% ( 180) 00:09:51.259 6956.898 - 7007.311: 71.2667% ( 139) 00:09:51.259 7007.311 - 7057.723: 71.9227% ( 110) 00:09:51.259 7057.723 - 7108.135: 72.4833% ( 94) 00:09:51.259 7108.135 - 7158.548: 72.9962% ( 86) 00:09:51.259 7158.548 - 7208.960: 73.4017% ( 68) 00:09:51.259 7208.960 - 7259.372: 73.8311% ( 72) 00:09:51.259 7259.372 - 7309.785: 74.1949% ( 61) 00:09:51.259 7309.785 - 7360.197: 74.5289% ( 56) 00:09:51.259 7360.197 - 7410.609: 74.8628% ( 56) 00:09:51.260 7410.609 - 7461.022: 75.1491% ( 48) 00:09:51.260 7461.022 - 7511.434: 75.4234% ( 46) 00:09:51.260 7511.434 - 7561.846: 75.6858% ( 44) 00:09:51.260 7561.846 - 7612.258: 75.9005% ( 36) 00:09:51.260 7612.258 - 7662.671: 76.0914% ( 32) 00:09:51.260 7662.671 - 7713.083: 76.3180% ( 38) 00:09:51.260 7713.083 - 7763.495: 76.5267% ( 35) 00:09:51.260 7763.495 - 7813.908: 76.7295% ( 34) 00:09:51.260 7813.908 - 7864.320: 76.9084% ( 30) 00:09:51.260 7864.320 - 7914.732: 77.0575% ( 25) 00:09:51.260 7914.732 - 7965.145: 77.1947% ( 23) 00:09:51.260 7965.145 - 8015.557: 77.3378% ( 24) 00:09:51.260 8015.557 - 8065.969: 77.4988% ( 27) 00:09:51.260 8065.969 - 8116.382: 77.6658% ( 28) 00:09:51.260 8116.382 - 8166.794: 77.8030% ( 23) 00:09:51.260 8166.794 - 8217.206: 77.9401% ( 23) 00:09:51.260 8217.206 - 8267.618: 78.0654% ( 21) 00:09:51.260 8267.618 - 8318.031: 78.1727% ( 18) 00:09:51.260 8318.031 - 8368.443: 78.2801% ( 18) 00:09:51.260 8368.443 - 8418.855: 78.3934% ( 19) 00:09:51.260 8418.855 - 8469.268: 78.5067% ( 19) 00:09:51.260 8469.268 - 8519.680: 78.6200% ( 19) 00:09:51.260 8519.680 - 8570.092: 78.7333% ( 19) 00:09:51.260 8570.092 - 8620.505: 78.8466% ( 19) 00:09:51.260 8620.505 - 8670.917: 78.9719% ( 21) 00:09:51.260 8670.917 - 8721.329: 79.0971% ( 21) 00:09:51.260 8721.329 - 8771.742: 79.2044% ( 18) 00:09:51.260 8771.742 - 8822.154: 79.3416% ( 23) 00:09:51.260 8822.154 - 8872.566: 79.4668% ( 21) 00:09:51.260 8872.566 - 8922.978: 79.5861% ( 20) 00:09:51.260 8922.978 - 8973.391: 79.7054% ( 20) 
00:09:51.260 8973.391 - 9023.803: 79.8426% ( 23) 00:09:51.260 9023.803 - 9074.215: 79.9618% ( 20) 00:09:51.260 9074.215 - 9124.628: 80.0930% ( 22) 00:09:51.260 9124.628 - 9175.040: 80.2183% ( 21) 00:09:51.260 9175.040 - 9225.452: 80.3495% ( 22) 00:09:51.260 9225.452 - 9275.865: 80.4449% ( 16) 00:09:51.260 9275.865 - 9326.277: 80.5344% ( 15) 00:09:51.260 9326.277 - 9376.689: 80.6417% ( 18) 00:09:51.260 9376.689 - 9427.102: 80.7312% ( 15) 00:09:51.260 9427.102 - 9477.514: 80.8087% ( 13) 00:09:51.260 9477.514 - 9527.926: 80.8862% ( 13) 00:09:51.260 9527.926 - 9578.338: 80.9697% ( 14) 00:09:51.260 9578.338 - 9628.751: 81.0592% ( 15) 00:09:51.260 9628.751 - 9679.163: 81.1427% ( 14) 00:09:51.260 9679.163 - 9729.575: 81.2083% ( 11) 00:09:51.260 9729.575 - 9779.988: 81.2679% ( 10) 00:09:51.260 9779.988 - 9830.400: 81.3335% ( 11) 00:09:51.260 9830.400 - 9880.812: 81.4468% ( 19) 00:09:51.260 9880.812 - 9931.225: 81.5303% ( 14) 00:09:51.260 9931.225 - 9981.637: 81.6317% ( 17) 00:09:51.260 9981.637 - 10032.049: 81.7331% ( 17) 00:09:51.260 10032.049 - 10082.462: 81.8285% ( 16) 00:09:51.260 10082.462 - 10132.874: 81.9418% ( 19) 00:09:51.260 10132.874 - 10183.286: 82.0312% ( 15) 00:09:51.260 10183.286 - 10233.698: 82.1267% ( 16) 00:09:51.260 10233.698 - 10284.111: 82.2400% ( 19) 00:09:51.260 10284.111 - 10334.523: 82.3652% ( 21) 00:09:51.260 10334.523 - 10384.935: 82.5083% ( 24) 00:09:51.260 10384.935 - 10435.348: 82.6574% ( 25) 00:09:51.260 10435.348 - 10485.760: 82.7946% ( 23) 00:09:51.260 10485.760 - 10536.172: 82.9377% ( 24) 00:09:51.260 10536.172 - 10586.585: 83.1167% ( 30) 00:09:51.260 10586.585 - 10636.997: 83.2836% ( 28) 00:09:51.260 10636.997 - 10687.409: 83.4625% ( 30) 00:09:51.260 10687.409 - 10737.822: 83.6653% ( 34) 00:09:51.260 10737.822 - 10788.234: 83.8800% ( 36) 00:09:51.260 10788.234 - 10838.646: 84.0768% ( 33) 00:09:51.260 10838.646 - 10889.058: 84.2677% ( 32) 00:09:51.260 10889.058 - 10939.471: 84.4704% ( 34) 00:09:51.260 10939.471 - 10989.883: 84.6851% ( 36) 00:09:51.260 10989.883 - 11040.295: 84.8938% ( 35) 00:09:51.260 11040.295 - 11090.708: 85.1085% ( 36) 00:09:51.260 11090.708 - 11141.120: 85.3292% ( 37) 00:09:51.260 11141.120 - 11191.532: 85.5439% ( 36) 00:09:51.260 11191.532 - 11241.945: 85.7407% ( 33) 00:09:51.260 11241.945 - 11292.357: 85.9554% ( 36) 00:09:51.260 11292.357 - 11342.769: 86.1641% ( 35) 00:09:51.260 11342.769 - 11393.182: 86.3669% ( 34) 00:09:51.260 11393.182 - 11443.594: 86.5518% ( 31) 00:09:51.260 11443.594 - 11494.006: 86.7545% ( 34) 00:09:51.260 11494.006 - 11544.418: 86.9633% ( 35) 00:09:51.260 11544.418 - 11594.831: 87.2257% ( 44) 00:09:51.260 11594.831 - 11645.243: 87.4463% ( 37) 00:09:51.260 11645.243 - 11695.655: 87.6670% ( 37) 00:09:51.260 11695.655 - 11746.068: 87.8698% ( 34) 00:09:51.260 11746.068 - 11796.480: 88.0487% ( 30) 00:09:51.260 11796.480 - 11846.892: 88.2574% ( 35) 00:09:51.260 11846.892 - 11897.305: 88.4423% ( 31) 00:09:51.260 11897.305 - 11947.717: 88.6271% ( 31) 00:09:51.260 11947.717 - 11998.129: 88.8240% ( 33) 00:09:51.260 11998.129 - 12048.542: 89.0208% ( 33) 00:09:51.260 12048.542 - 12098.954: 89.2176% ( 33) 00:09:51.260 12098.954 - 12149.366: 89.4024% ( 31) 00:09:51.260 12149.366 - 12199.778: 89.5873% ( 31) 00:09:51.260 12199.778 - 12250.191: 89.7781% ( 32) 00:09:51.260 12250.191 - 12300.603: 89.9392% ( 27) 00:09:51.260 12300.603 - 12351.015: 90.1002% ( 27) 00:09:51.260 12351.015 - 12401.428: 90.2552% ( 26) 00:09:51.260 12401.428 - 12451.840: 90.4461% ( 32) 00:09:51.260 12451.840 - 12502.252: 90.6011% ( 26) 00:09:51.260 12502.252 - 
12552.665: 90.7443% ( 24) 00:09:51.260 12552.665 - 12603.077: 90.8993% ( 26) 00:09:51.260 12603.077 - 12653.489: 91.0246% ( 21) 00:09:51.260 12653.489 - 12703.902: 91.1677% ( 24) 00:09:51.260 12703.902 - 12754.314: 91.2750% ( 18) 00:09:51.260 12754.314 - 12804.726: 91.4122% ( 23) 00:09:51.260 12804.726 - 12855.138: 91.5553% ( 24) 00:09:51.260 12855.138 - 12905.551: 91.7044% ( 25) 00:09:51.260 12905.551 - 13006.375: 91.9728% ( 45) 00:09:51.260 13006.375 - 13107.200: 92.2650% ( 49) 00:09:51.260 13107.200 - 13208.025: 92.5632% ( 50) 00:09:51.260 13208.025 - 13308.849: 92.8137% ( 42) 00:09:51.260 13308.849 - 13409.674: 93.0761% ( 44) 00:09:51.260 13409.674 - 13510.498: 93.3504% ( 46) 00:09:51.260 13510.498 - 13611.323: 93.6188% ( 45) 00:09:51.260 13611.323 - 13712.148: 93.9170% ( 50) 00:09:51.260 13712.148 - 13812.972: 94.1615% ( 41) 00:09:51.260 13812.972 - 13913.797: 94.4478% ( 48) 00:09:51.260 13913.797 - 14014.622: 94.7817% ( 56) 00:09:51.260 14014.622 - 14115.446: 95.0859% ( 51) 00:09:51.260 14115.446 - 14216.271: 95.3841% ( 50) 00:09:51.260 14216.271 - 14317.095: 95.6584% ( 46) 00:09:51.260 14317.095 - 14417.920: 95.8671% ( 35) 00:09:51.260 14417.920 - 14518.745: 96.0460% ( 30) 00:09:51.260 14518.745 - 14619.569: 96.2309% ( 31) 00:09:51.260 14619.569 - 14720.394: 96.4039% ( 29) 00:09:51.260 14720.394 - 14821.218: 96.5708% ( 28) 00:09:51.260 14821.218 - 14922.043: 96.7557% ( 31) 00:09:51.260 14922.043 - 15022.868: 96.9287% ( 29) 00:09:51.260 15022.868 - 15123.692: 97.0957% ( 28) 00:09:51.260 15123.692 - 15224.517: 97.2686% ( 29) 00:09:51.260 15224.517 - 15325.342: 97.4296% ( 27) 00:09:51.260 15325.342 - 15426.166: 97.5489% ( 20) 00:09:51.260 15426.166 - 15526.991: 97.6503% ( 17) 00:09:51.260 15526.991 - 15627.815: 97.7636% ( 19) 00:09:51.260 15627.815 - 15728.640: 97.8590% ( 16) 00:09:51.260 15728.640 - 15829.465: 97.9604% ( 17) 00:09:51.260 15829.465 - 15930.289: 98.0200% ( 10) 00:09:51.260 15930.289 - 16031.114: 98.0797% ( 10) 00:09:51.260 16031.114 - 16131.938: 98.1333% ( 9) 00:09:51.260 16131.938 - 16232.763: 98.2049% ( 12) 00:09:51.260 16232.763 - 16333.588: 98.2765% ( 12) 00:09:51.260 16333.588 - 16434.412: 98.3540% ( 13) 00:09:51.260 16434.412 - 16535.237: 98.4315% ( 13) 00:09:51.260 16535.237 - 16636.062: 98.5091% ( 13) 00:09:51.260 16636.062 - 16736.886: 98.5866% ( 13) 00:09:51.260 16736.886 - 16837.711: 98.6641% ( 13) 00:09:51.260 16837.711 - 16938.535: 98.7357% ( 12) 00:09:51.260 16938.535 - 17039.360: 98.8073% ( 12) 00:09:51.260 17039.360 - 17140.185: 98.8729% ( 11) 00:09:51.260 17140.185 - 17241.009: 98.9086% ( 6) 00:09:51.260 17241.009 - 17341.834: 98.9444% ( 6) 00:09:51.260 17341.834 - 17442.658: 98.9802% ( 6) 00:09:51.260 17442.658 - 17543.483: 99.0160% ( 6) 00:09:51.260 17543.483 - 17644.308: 99.0458% ( 5) 00:09:51.260 17644.308 - 17745.132: 99.0816% ( 6) 00:09:51.260 17745.132 - 17845.957: 99.1174% ( 6) 00:09:51.260 17845.957 - 17946.782: 99.1531% ( 6) 00:09:51.260 17946.782 - 18047.606: 99.1949% ( 7) 00:09:51.260 18047.606 - 18148.431: 99.2307% ( 6) 00:09:51.260 18148.431 - 18249.255: 99.2366% ( 1) 00:09:51.260 28835.840 - 29037.489: 99.2665% ( 5) 00:09:51.260 29037.489 - 29239.138: 99.3142% ( 8) 00:09:51.260 29239.138 - 29440.788: 99.3619% ( 8) 00:09:51.260 29440.788 - 29642.437: 99.4096% ( 8) 00:09:51.260 29642.437 - 29844.086: 99.4633% ( 9) 00:09:51.260 29844.086 - 30045.735: 99.5110% ( 8) 00:09:51.260 30045.735 - 30247.385: 99.5587% ( 8) 00:09:51.260 30247.385 - 30449.034: 99.6124% ( 9) 00:09:51.260 30449.034 - 30650.683: 99.6601% ( 8) 00:09:51.260 30650.683 - 
30852.332: 99.7078% ( 8) 00:09:51.260 30852.332 - 31053.982: 99.7555% ( 8) 00:09:51.260 31053.982 - 31255.631: 99.8092% ( 9) 00:09:51.260 31255.631 - 31457.280: 99.8569% ( 8) 00:09:51.260 31457.280 - 31658.929: 99.9105% ( 9) 00:09:51.260 31658.929 - 31860.578: 99.9583% ( 8) 00:09:51.260 31860.578 - 32062.228: 100.0000% ( 7) 00:09:51.260 00:09:51.260 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:51.260 ============================================================================== 00:09:51.260 Range in us Cumulative IO count 00:09:51.260 4940.406 - 4965.612: 0.0119% ( 2) 00:09:51.260 4965.612 - 4990.818: 0.0477% ( 6) 00:09:51.260 4990.818 - 5016.025: 0.1073% ( 10) 00:09:51.260 5016.025 - 5041.231: 0.2207% ( 19) 00:09:51.260 5041.231 - 5066.437: 0.3340% ( 19) 00:09:51.260 5066.437 - 5091.643: 0.4473% ( 19) 00:09:51.260 5091.643 - 5116.849: 0.5844% ( 23) 00:09:51.260 5116.849 - 5142.055: 0.8409% ( 43) 00:09:51.261 5142.055 - 5167.262: 1.0735% ( 39) 00:09:51.261 5167.262 - 5192.468: 1.3776% ( 51) 00:09:51.261 5192.468 - 5217.674: 1.7056% ( 55) 00:09:51.261 5217.674 - 5242.880: 2.1171% ( 69) 00:09:51.261 5242.880 - 5268.086: 2.6896% ( 96) 00:09:51.261 5268.086 - 5293.292: 3.3158% ( 105) 00:09:51.261 5293.292 - 5318.498: 4.1090% ( 133) 00:09:51.261 5318.498 - 5343.705: 4.8485% ( 124) 00:09:51.261 5343.705 - 5368.911: 5.6536% ( 135) 00:09:51.261 5368.911 - 5394.117: 6.5601% ( 152) 00:09:51.261 5394.117 - 5419.323: 7.4487% ( 149) 00:09:51.261 5419.323 - 5444.529: 8.4208% ( 163) 00:09:51.261 5444.529 - 5469.735: 9.3631% ( 158) 00:09:51.261 5469.735 - 5494.942: 10.3650% ( 168) 00:09:51.261 5494.942 - 5520.148: 11.3967% ( 173) 00:09:51.261 5520.148 - 5545.354: 12.4165% ( 171) 00:09:51.261 5545.354 - 5570.560: 13.3946% ( 164) 00:09:51.261 5570.560 - 5595.766: 14.4501% ( 177) 00:09:51.261 5595.766 - 5620.972: 15.5296% ( 181) 00:09:51.261 5620.972 - 5646.178: 16.6329% ( 185) 00:09:51.261 5646.178 - 5671.385: 17.7302% ( 184) 00:09:51.261 5671.385 - 5696.591: 18.8335% ( 185) 00:09:51.261 5696.591 - 5721.797: 19.9666% ( 190) 00:09:51.261 5721.797 - 5747.003: 21.0818% ( 187) 00:09:51.261 5747.003 - 5772.209: 22.2149% ( 190) 00:09:51.261 5772.209 - 5797.415: 23.3421% ( 189) 00:09:51.261 5797.415 - 5822.622: 24.4812% ( 191) 00:09:51.261 5822.622 - 5847.828: 25.6023% ( 188) 00:09:51.261 5847.828 - 5873.034: 26.7235% ( 188) 00:09:51.261 5873.034 - 5898.240: 27.8805% ( 194) 00:09:51.261 5898.240 - 5923.446: 28.9719% ( 183) 00:09:51.261 5923.446 - 5948.652: 30.1169% ( 192) 00:09:51.261 5948.652 - 5973.858: 31.3216% ( 202) 00:09:51.261 5973.858 - 5999.065: 32.4487% ( 189) 00:09:51.261 5999.065 - 6024.271: 33.5759% ( 189) 00:09:51.261 6024.271 - 6049.477: 34.7269% ( 193) 00:09:51.261 6049.477 - 6074.683: 35.8659% ( 191) 00:09:51.261 6074.683 - 6099.889: 37.0229% ( 194) 00:09:51.261 6099.889 - 6125.095: 38.1500% ( 189) 00:09:51.261 6125.095 - 6150.302: 39.3189% ( 196) 00:09:51.261 6150.302 - 6175.508: 40.4461% ( 189) 00:09:51.261 6175.508 - 6200.714: 41.5852% ( 191) 00:09:51.261 6200.714 - 6225.920: 42.7123% ( 189) 00:09:51.261 6225.920 - 6251.126: 43.9170% ( 202) 00:09:51.261 6251.126 - 6276.332: 45.0680% ( 193) 00:09:51.261 6276.332 - 6301.538: 46.2190% ( 193) 00:09:51.261 6301.538 - 6326.745: 47.3581% ( 191) 00:09:51.261 6326.745 - 6351.951: 48.4673% ( 186) 00:09:51.261 6351.951 - 6377.157: 49.6243% ( 194) 00:09:51.261 6377.157 - 6402.363: 50.7455% ( 188) 00:09:51.261 6402.363 - 6427.569: 51.8547% ( 186) 00:09:51.261 6427.569 - 6452.775: 52.9878% ( 190) 00:09:51.261 6452.775 - 6503.188: 
55.2421% ( 378) 00:09:51.261 6503.188 - 6553.600: 57.4249% ( 366) 00:09:51.261 6553.600 - 6604.012: 59.5420% ( 355) 00:09:51.261 6604.012 - 6654.425: 61.5518% ( 337) 00:09:51.261 6654.425 - 6704.837: 63.3469% ( 301) 00:09:51.261 6704.837 - 6755.249: 65.0227% ( 281) 00:09:51.261 6755.249 - 6805.662: 66.5494% ( 256) 00:09:51.261 6805.662 - 6856.074: 67.9449% ( 234) 00:09:51.261 6856.074 - 6906.486: 69.1734% ( 206) 00:09:51.261 6906.486 - 6956.898: 70.1574% ( 165) 00:09:51.261 6956.898 - 7007.311: 70.9387% ( 131) 00:09:51.261 7007.311 - 7057.723: 71.5649% ( 105) 00:09:51.261 7057.723 - 7108.135: 72.0479% ( 81) 00:09:51.261 7108.135 - 7158.548: 72.4893% ( 74) 00:09:51.261 7158.548 - 7208.960: 72.9008% ( 69) 00:09:51.261 7208.960 - 7259.372: 73.2884% ( 65) 00:09:51.261 7259.372 - 7309.785: 73.6343% ( 58) 00:09:51.261 7309.785 - 7360.197: 73.9862% ( 59) 00:09:51.261 7360.197 - 7410.609: 74.3022% ( 53) 00:09:51.261 7410.609 - 7461.022: 74.6004% ( 50) 00:09:51.261 7461.022 - 7511.434: 74.8807% ( 47) 00:09:51.261 7511.434 - 7561.846: 75.1491% ( 45) 00:09:51.261 7561.846 - 7612.258: 75.4532% ( 51) 00:09:51.261 7612.258 - 7662.671: 75.7276% ( 46) 00:09:51.261 7662.671 - 7713.083: 75.9840% ( 43) 00:09:51.261 7713.083 - 7763.495: 76.2047% ( 37) 00:09:51.261 7763.495 - 7813.908: 76.4015% ( 33) 00:09:51.261 7813.908 - 7864.320: 76.5685% ( 28) 00:09:51.261 7864.320 - 7914.732: 76.7533% ( 31) 00:09:51.261 7914.732 - 7965.145: 76.9024% ( 25) 00:09:51.261 7965.145 - 8015.557: 77.0515% ( 25) 00:09:51.261 8015.557 - 8065.969: 77.2066% ( 26) 00:09:51.261 8065.969 - 8116.382: 77.3557% ( 25) 00:09:51.261 8116.382 - 8166.794: 77.4928% ( 23) 00:09:51.261 8166.794 - 8217.206: 77.6300% ( 23) 00:09:51.261 8217.206 - 8267.618: 77.7672% ( 23) 00:09:51.261 8267.618 - 8318.031: 77.8924% ( 21) 00:09:51.261 8318.031 - 8368.443: 78.0177% ( 21) 00:09:51.261 8368.443 - 8418.855: 78.1489% ( 22) 00:09:51.261 8418.855 - 8469.268: 78.2741% ( 21) 00:09:51.261 8469.268 - 8519.680: 78.3934% ( 20) 00:09:51.261 8519.680 - 8570.092: 78.5067% ( 19) 00:09:51.261 8570.092 - 8620.505: 78.6260% ( 20) 00:09:51.261 8620.505 - 8670.917: 78.7393% ( 19) 00:09:51.261 8670.917 - 8721.329: 78.8406% ( 17) 00:09:51.261 8721.329 - 8771.742: 78.9540% ( 19) 00:09:51.261 8771.742 - 8822.154: 79.0553% ( 17) 00:09:51.261 8822.154 - 8872.566: 79.1627% ( 18) 00:09:51.261 8872.566 - 8922.978: 79.2820% ( 20) 00:09:51.261 8922.978 - 8973.391: 79.4012% ( 20) 00:09:51.261 8973.391 - 9023.803: 79.5086% ( 18) 00:09:51.261 9023.803 - 9074.215: 79.6458% ( 23) 00:09:51.261 9074.215 - 9124.628: 79.7770% ( 22) 00:09:51.261 9124.628 - 9175.040: 79.8843% ( 18) 00:09:51.261 9175.040 - 9225.452: 79.9857% ( 17) 00:09:51.261 9225.452 - 9275.865: 80.0930% ( 18) 00:09:51.261 9275.865 - 9326.277: 80.2183% ( 21) 00:09:51.261 9326.277 - 9376.689: 80.3853% ( 28) 00:09:51.261 9376.689 - 9427.102: 80.5105% ( 21) 00:09:51.261 9427.102 - 9477.514: 80.6238% ( 19) 00:09:51.261 9477.514 - 9527.926: 80.7371% ( 19) 00:09:51.261 9527.926 - 9578.338: 80.8564% ( 20) 00:09:51.261 9578.338 - 9628.751: 80.9816% ( 21) 00:09:51.261 9628.751 - 9679.163: 81.1069% ( 21) 00:09:51.261 9679.163 - 9729.575: 81.2321% ( 21) 00:09:51.261 9729.575 - 9779.988: 81.3693% ( 23) 00:09:51.261 9779.988 - 9830.400: 81.5064% ( 23) 00:09:51.261 9830.400 - 9880.812: 81.6615% ( 26) 00:09:51.261 9880.812 - 9931.225: 81.8046% ( 24) 00:09:51.261 9931.225 - 9981.637: 81.9656% ( 27) 00:09:51.261 9981.637 - 10032.049: 82.1267% ( 27) 00:09:51.261 10032.049 - 10082.462: 82.2698% ( 24) 00:09:51.261 10082.462 - 10132.874: 82.4547% 
( 31) 00:09:51.261 10132.874 - 10183.286: 82.6157% ( 27) 00:09:51.261 10183.286 - 10233.698: 82.7827% ( 28) 00:09:51.261 10233.698 - 10284.111: 82.9556% ( 29) 00:09:51.261 10284.111 - 10334.523: 83.1167% ( 27) 00:09:51.261 10334.523 - 10384.935: 83.2956% ( 30) 00:09:51.261 10384.935 - 10435.348: 83.4864% ( 32) 00:09:51.261 10435.348 - 10485.760: 83.6772% ( 32) 00:09:51.261 10485.760 - 10536.172: 83.9039% ( 38) 00:09:51.261 10536.172 - 10586.585: 84.1007% ( 33) 00:09:51.261 10586.585 - 10636.997: 84.3094% ( 35) 00:09:51.261 10636.997 - 10687.409: 84.5002% ( 32) 00:09:51.261 10687.409 - 10737.822: 84.7149% ( 36) 00:09:51.261 10737.822 - 10788.234: 84.8998% ( 31) 00:09:51.261 10788.234 - 10838.646: 85.0847% ( 31) 00:09:51.261 10838.646 - 10889.058: 85.2576% ( 29) 00:09:51.261 10889.058 - 10939.471: 85.4187% ( 27) 00:09:51.261 10939.471 - 10989.883: 85.5976% ( 30) 00:09:51.261 10989.883 - 11040.295: 85.7705% ( 29) 00:09:51.261 11040.295 - 11090.708: 85.9494% ( 30) 00:09:51.261 11090.708 - 11141.120: 86.0985% ( 25) 00:09:51.261 11141.120 - 11191.532: 86.2536% ( 26) 00:09:51.261 11191.532 - 11241.945: 86.4086% ( 26) 00:09:51.261 11241.945 - 11292.357: 86.5816% ( 29) 00:09:51.261 11292.357 - 11342.769: 86.7307% ( 25) 00:09:51.261 11342.769 - 11393.182: 86.9036% ( 29) 00:09:51.261 11393.182 - 11443.594: 87.0587% ( 26) 00:09:51.261 11443.594 - 11494.006: 87.2078% ( 25) 00:09:51.261 11494.006 - 11544.418: 87.3867% ( 30) 00:09:51.261 11544.418 - 11594.831: 87.5716% ( 31) 00:09:51.261 11594.831 - 11645.243: 87.7505% ( 30) 00:09:51.261 11645.243 - 11695.655: 87.9175% ( 28) 00:09:51.261 11695.655 - 11746.068: 88.0904% ( 29) 00:09:51.261 11746.068 - 11796.480: 88.2753% ( 31) 00:09:51.261 11796.480 - 11846.892: 88.4661% ( 32) 00:09:51.261 11846.892 - 11897.305: 88.6808% ( 36) 00:09:51.261 11897.305 - 11947.717: 88.8657% ( 31) 00:09:51.261 11947.717 - 11998.129: 89.0506% ( 31) 00:09:51.261 11998.129 - 12048.542: 89.2653% ( 36) 00:09:51.261 12048.542 - 12098.954: 89.4501% ( 31) 00:09:51.261 12098.954 - 12149.366: 89.6469% ( 33) 00:09:51.261 12149.366 - 12199.778: 89.8199% ( 29) 00:09:51.261 12199.778 - 12250.191: 89.9988% ( 30) 00:09:51.261 12250.191 - 12300.603: 90.1837% ( 31) 00:09:51.261 12300.603 - 12351.015: 90.3328% ( 25) 00:09:51.261 12351.015 - 12401.428: 90.4759% ( 24) 00:09:51.261 12401.428 - 12451.840: 90.6310% ( 26) 00:09:51.261 12451.840 - 12502.252: 90.7860% ( 26) 00:09:51.261 12502.252 - 12552.665: 90.9411% ( 26) 00:09:51.261 12552.665 - 12603.077: 91.0723% ( 22) 00:09:51.261 12603.077 - 12653.489: 91.2452% ( 29) 00:09:51.261 12653.489 - 12703.902: 91.3884% ( 24) 00:09:51.261 12703.902 - 12754.314: 91.5375% ( 25) 00:09:51.261 12754.314 - 12804.726: 91.6865% ( 25) 00:09:51.261 12804.726 - 12855.138: 91.8416% ( 26) 00:09:51.261 12855.138 - 12905.551: 92.0324% ( 32) 00:09:51.261 12905.551 - 13006.375: 92.3962% ( 61) 00:09:51.261 13006.375 - 13107.200: 92.7481% ( 59) 00:09:51.261 13107.200 - 13208.025: 93.1178% ( 62) 00:09:51.261 13208.025 - 13308.849: 93.5115% ( 66) 00:09:51.261 13308.849 - 13409.674: 93.8752% ( 61) 00:09:51.261 13409.674 - 13510.498: 94.1854% ( 52) 00:09:51.261 13510.498 - 13611.323: 94.4835% ( 50) 00:09:51.261 13611.323 - 13712.148: 94.7937% ( 52) 00:09:51.261 13712.148 - 13812.972: 95.0918% ( 50) 00:09:51.261 13812.972 - 13913.797: 95.3781% ( 48) 00:09:51.261 13913.797 - 14014.622: 95.6524% ( 46) 00:09:51.262 14014.622 - 14115.446: 95.9148% ( 44) 00:09:51.262 14115.446 - 14216.271: 96.1713% ( 43) 00:09:51.262 14216.271 - 14317.095: 96.3979% ( 38) 00:09:51.262 14317.095 - 
14417.920: 96.5828% ( 31) 00:09:51.262 14417.920 - 14518.745: 96.7617% ( 30) 00:09:51.262 14518.745 - 14619.569: 96.9406% ( 30) 00:09:51.262 14619.569 - 14720.394: 97.1076% ( 28) 00:09:51.262 14720.394 - 14821.218: 97.2805% ( 29) 00:09:51.262 14821.218 - 14922.043: 97.4475% ( 28) 00:09:51.262 14922.043 - 15022.868: 97.5966% ( 25) 00:09:51.262 15022.868 - 15123.692: 97.7219% ( 21) 00:09:51.262 15123.692 - 15224.517: 97.8352% ( 19) 00:09:51.262 15224.517 - 15325.342: 97.9365% ( 17) 00:09:51.262 15325.342 - 15426.166: 98.0200% ( 14) 00:09:51.262 15426.166 - 15526.991: 98.1035% ( 14) 00:09:51.262 15526.991 - 15627.815: 98.1811% ( 13) 00:09:51.262 15627.815 - 15728.640: 98.2586% ( 13) 00:09:51.262 15728.640 - 15829.465: 98.3302% ( 12) 00:09:51.262 15829.465 - 15930.289: 98.4017% ( 12) 00:09:51.262 15930.289 - 16031.114: 98.4435% ( 7) 00:09:51.262 16031.114 - 16131.938: 98.4733% ( 5) 00:09:51.262 17140.185 - 17241.009: 98.4792% ( 1) 00:09:51.262 17241.009 - 17341.834: 98.5150% ( 6) 00:09:51.262 17341.834 - 17442.658: 98.5568% ( 7) 00:09:51.262 17442.658 - 17543.483: 98.5926% ( 6) 00:09:51.262 17543.483 - 17644.308: 98.6224% ( 5) 00:09:51.262 17644.308 - 17745.132: 98.6582% ( 6) 00:09:51.262 17745.132 - 17845.957: 98.6939% ( 6) 00:09:51.262 17845.957 - 17946.782: 98.7357% ( 7) 00:09:51.262 17946.782 - 18047.606: 98.7774% ( 7) 00:09:51.262 18047.606 - 18148.431: 98.8132% ( 6) 00:09:51.262 18148.431 - 18249.255: 98.8550% ( 7) 00:09:51.262 18249.255 - 18350.080: 98.8907% ( 6) 00:09:51.262 18350.080 - 18450.905: 98.9325% ( 7) 00:09:51.262 18450.905 - 18551.729: 98.9742% ( 7) 00:09:51.262 18551.729 - 18652.554: 99.0160% ( 7) 00:09:51.262 18652.554 - 18753.378: 99.0518% ( 6) 00:09:51.262 18753.378 - 18854.203: 99.0875% ( 6) 00:09:51.262 18854.203 - 18955.028: 99.1174% ( 5) 00:09:51.262 18955.028 - 19055.852: 99.1531% ( 6) 00:09:51.262 19055.852 - 19156.677: 99.1949% ( 7) 00:09:51.262 19156.677 - 19257.502: 99.2307% ( 6) 00:09:51.262 19257.502 - 19358.326: 99.2366% ( 1) 00:09:51.262 30650.683 - 30852.332: 99.2605% ( 4) 00:09:51.262 30852.332 - 31053.982: 99.3142% ( 9) 00:09:51.262 31053.982 - 31255.631: 99.3619% ( 8) 00:09:51.262 31255.631 - 31457.280: 99.4096% ( 8) 00:09:51.262 31457.280 - 31658.929: 99.4573% ( 8) 00:09:51.262 31658.929 - 31860.578: 99.4990% ( 7) 00:09:51.262 31860.578 - 32062.228: 99.5468% ( 8) 00:09:51.262 32062.228 - 32263.877: 99.5945% ( 8) 00:09:51.262 32263.877 - 32465.526: 99.6481% ( 9) 00:09:51.262 32465.526 - 32667.175: 99.6899% ( 7) 00:09:51.262 32667.175 - 32868.825: 99.7436% ( 9) 00:09:51.262 32868.825 - 33070.474: 99.7913% ( 8) 00:09:51.262 33070.474 - 33272.123: 99.8390% ( 8) 00:09:51.262 33272.123 - 33473.772: 99.8867% ( 8) 00:09:51.262 33473.772 - 33675.422: 99.9344% ( 8) 00:09:51.262 33675.422 - 33877.071: 99.9821% ( 8) 00:09:51.262 33877.071 - 34078.720: 100.0000% ( 3) 00:09:51.262 00:09:51.262 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:51.262 ============================================================================== 00:09:51.262 Range in us Cumulative IO count 00:09:51.262 5041.231 - 5066.437: 0.0656% ( 11) 00:09:51.262 5066.437 - 5091.643: 0.1670% ( 17) 00:09:51.262 5091.643 - 5116.849: 0.2743% ( 18) 00:09:51.262 5116.849 - 5142.055: 0.5308% ( 43) 00:09:51.262 5142.055 - 5167.262: 0.7932% ( 44) 00:09:51.262 5167.262 - 5192.468: 1.0735% ( 47) 00:09:51.262 5192.468 - 5217.674: 1.3896% ( 53) 00:09:51.262 5217.674 - 5242.880: 1.8547% ( 78) 00:09:51.262 5242.880 - 5268.086: 2.3676% ( 86) 00:09:51.262 5268.086 - 5293.292: 2.9640% ( 100) 
00:09:51.262 5293.292 - 5318.498: 3.6558% ( 116) 00:09:51.262 5318.498 - 5343.705: 4.5265% ( 146) 00:09:51.262 5343.705 - 5368.911: 5.4509% ( 155) 00:09:51.262 5368.911 - 5394.117: 6.3216% ( 146) 00:09:51.262 5394.117 - 5419.323: 7.2758% ( 160) 00:09:51.262 5419.323 - 5444.529: 8.2479% ( 163) 00:09:51.262 5444.529 - 5469.735: 9.2140% ( 162) 00:09:51.262 5469.735 - 5494.942: 10.1741% ( 161) 00:09:51.262 5494.942 - 5520.148: 11.2357% ( 178) 00:09:51.262 5520.148 - 5545.354: 12.3032% ( 179) 00:09:51.262 5545.354 - 5570.560: 13.3469% ( 175) 00:09:51.262 5570.560 - 5595.766: 14.3726% ( 172) 00:09:51.262 5595.766 - 5620.972: 15.3924% ( 171) 00:09:51.262 5620.972 - 5646.178: 16.5017% ( 186) 00:09:51.262 5646.178 - 5671.385: 17.5871% ( 182) 00:09:51.262 5671.385 - 5696.591: 18.5890% ( 168) 00:09:51.262 5696.591 - 5721.797: 19.6565% ( 179) 00:09:51.262 5721.797 - 5747.003: 20.7180% ( 178) 00:09:51.262 5747.003 - 5772.209: 21.7736% ( 177) 00:09:51.262 5772.209 - 5797.415: 22.8471% ( 180) 00:09:51.262 5797.415 - 5822.622: 23.8550% ( 169) 00:09:51.262 5822.622 - 5847.828: 24.9523% ( 184) 00:09:51.262 5847.828 - 5873.034: 26.0258% ( 180) 00:09:51.262 5873.034 - 5898.240: 27.0992% ( 180) 00:09:51.262 5898.240 - 5923.446: 28.1608% ( 178) 00:09:51.262 5923.446 - 5948.652: 29.2402% ( 181) 00:09:51.262 5948.652 - 5973.858: 30.3554% ( 187) 00:09:51.262 5973.858 - 5999.065: 31.4468% ( 183) 00:09:51.262 5999.065 - 6024.271: 32.5740% ( 189) 00:09:51.262 6024.271 - 6049.477: 33.6832% ( 186) 00:09:51.262 6049.477 - 6074.683: 34.8342% ( 193) 00:09:51.262 6074.683 - 6099.889: 35.9912% ( 194) 00:09:51.262 6099.889 - 6125.095: 37.1124% ( 188) 00:09:51.262 6125.095 - 6150.302: 38.2097% ( 184) 00:09:51.262 6150.302 - 6175.508: 39.3488% ( 191) 00:09:51.262 6175.508 - 6200.714: 40.4759% ( 189) 00:09:51.262 6200.714 - 6225.920: 41.6329% ( 194) 00:09:51.262 6225.920 - 6251.126: 42.7600% ( 189) 00:09:51.262 6251.126 - 6276.332: 43.9408% ( 198) 00:09:51.262 6276.332 - 6301.538: 45.0918% ( 193) 00:09:51.262 6301.538 - 6326.745: 46.2428% ( 193) 00:09:51.262 6326.745 - 6351.951: 47.3998% ( 194) 00:09:51.262 6351.951 - 6377.157: 48.5329% ( 190) 00:09:51.262 6377.157 - 6402.363: 49.7197% ( 199) 00:09:51.262 6402.363 - 6427.569: 50.8528% ( 190) 00:09:51.262 6427.569 - 6452.775: 51.9561% ( 185) 00:09:51.262 6452.775 - 6503.188: 54.2581% ( 386) 00:09:51.262 6503.188 - 6553.600: 56.4647% ( 370) 00:09:51.262 6553.600 - 6604.012: 58.6355% ( 364) 00:09:51.262 6604.012 - 6654.425: 60.7049% ( 347) 00:09:51.262 6654.425 - 6704.837: 62.6133% ( 320) 00:09:51.262 6704.837 - 6755.249: 64.3786% ( 296) 00:09:51.262 6755.249 - 6805.662: 65.9470% ( 263) 00:09:51.262 6805.662 - 6856.074: 67.3068% ( 228) 00:09:51.262 6856.074 - 6906.486: 68.4339% ( 189) 00:09:51.262 6906.486 - 6956.898: 69.3941% ( 161) 00:09:51.262 6956.898 - 7007.311: 70.1574% ( 128) 00:09:51.262 7007.311 - 7057.723: 70.7717% ( 103) 00:09:51.262 7057.723 - 7108.135: 71.2906% ( 87) 00:09:51.262 7108.135 - 7158.548: 71.7975% ( 85) 00:09:51.262 7158.548 - 7208.960: 72.2209% ( 71) 00:09:51.262 7208.960 - 7259.372: 72.6026% ( 64) 00:09:51.262 7259.372 - 7309.785: 72.9544% ( 59) 00:09:51.262 7309.785 - 7360.197: 73.3242% ( 62) 00:09:51.262 7360.197 - 7410.609: 73.6820% ( 60) 00:09:51.262 7410.609 - 7461.022: 74.0219% ( 57) 00:09:51.262 7461.022 - 7511.434: 74.3440% ( 54) 00:09:51.262 7511.434 - 7561.846: 74.6183% ( 46) 00:09:51.262 7561.846 - 7612.258: 74.8867% ( 45) 00:09:51.262 7612.258 - 7662.671: 75.1610% ( 46) 00:09:51.262 7662.671 - 7713.083: 75.4234% ( 44) 00:09:51.262 7713.083 - 
7763.495: 75.6560% ( 39) 00:09:51.262 7763.495 - 7813.908: 75.9005% ( 41) 00:09:51.262 7813.908 - 7864.320: 76.1212% ( 37) 00:09:51.262 7864.320 - 7914.732: 76.3657% ( 41) 00:09:51.262 7914.732 - 7965.145: 76.5744% ( 35) 00:09:51.262 7965.145 - 8015.557: 76.8010% ( 38) 00:09:51.262 8015.557 - 8065.969: 77.0038% ( 34) 00:09:51.262 8065.969 - 8116.382: 77.2125% ( 35) 00:09:51.262 8116.382 - 8166.794: 77.3915% ( 30) 00:09:51.262 8166.794 - 8217.206: 77.5704% ( 30) 00:09:51.262 8217.206 - 8267.618: 77.7552% ( 31) 00:09:51.262 8267.618 - 8318.031: 77.9282% ( 29) 00:09:51.262 8318.031 - 8368.443: 78.0654% ( 23) 00:09:51.262 8368.443 - 8418.855: 78.1846% ( 20) 00:09:51.262 8418.855 - 8469.268: 78.3278% ( 24) 00:09:51.263 8469.268 - 8519.680: 78.4590% ( 22) 00:09:51.263 8519.680 - 8570.092: 78.6021% ( 24) 00:09:51.263 8570.092 - 8620.505: 78.7214% ( 20) 00:09:51.263 8620.505 - 8670.917: 78.8406% ( 20) 00:09:51.263 8670.917 - 8721.329: 78.9719% ( 22) 00:09:51.263 8721.329 - 8771.742: 79.0852% ( 19) 00:09:51.263 8771.742 - 8822.154: 79.2164% ( 22) 00:09:51.263 8822.154 - 8872.566: 79.3356% ( 20) 00:09:51.263 8872.566 - 8922.978: 79.4490% ( 19) 00:09:51.263 8922.978 - 8973.391: 79.5861% ( 23) 00:09:51.263 8973.391 - 9023.803: 79.7114% ( 21) 00:09:51.263 9023.803 - 9074.215: 79.8247% ( 19) 00:09:51.263 9074.215 - 9124.628: 79.9976% ( 29) 00:09:51.263 9124.628 - 9175.040: 80.1229% ( 21) 00:09:51.263 9175.040 - 9225.452: 80.2481% ( 21) 00:09:51.263 9225.452 - 9275.865: 80.3435% ( 16) 00:09:51.263 9275.865 - 9326.277: 80.4926% ( 25) 00:09:51.263 9326.277 - 9376.689: 80.6477% ( 26) 00:09:51.263 9376.689 - 9427.102: 80.8206% ( 29) 00:09:51.263 9427.102 - 9477.514: 80.9816% ( 27) 00:09:51.263 9477.514 - 9527.926: 81.1486% ( 28) 00:09:51.263 9527.926 - 9578.338: 81.2917% ( 24) 00:09:51.263 9578.338 - 9628.751: 81.4289% ( 23) 00:09:51.263 9628.751 - 9679.163: 81.5542% ( 21) 00:09:51.263 9679.163 - 9729.575: 81.6854% ( 22) 00:09:51.263 9729.575 - 9779.988: 81.8344% ( 25) 00:09:51.263 9779.988 - 9830.400: 81.9776% ( 24) 00:09:51.263 9830.400 - 9880.812: 82.1207% ( 24) 00:09:51.263 9880.812 - 9931.225: 82.2758% ( 26) 00:09:51.263 9931.225 - 9981.637: 82.4308% ( 26) 00:09:51.263 9981.637 - 10032.049: 82.6217% ( 32) 00:09:51.263 10032.049 - 10082.462: 82.7886% ( 28) 00:09:51.263 10082.462 - 10132.874: 82.9377% ( 25) 00:09:51.263 10132.874 - 10183.286: 83.0868% ( 25) 00:09:51.263 10183.286 - 10233.698: 83.2657% ( 30) 00:09:51.263 10233.698 - 10284.111: 83.4625% ( 33) 00:09:51.263 10284.111 - 10334.523: 83.6474% ( 31) 00:09:51.263 10334.523 - 10384.935: 83.8263% ( 30) 00:09:51.263 10384.935 - 10435.348: 84.0231% ( 33) 00:09:51.263 10435.348 - 10485.760: 84.1961% ( 29) 00:09:51.263 10485.760 - 10536.172: 84.3511% ( 26) 00:09:51.263 10536.172 - 10586.585: 84.5181% ( 28) 00:09:51.263 10586.585 - 10636.997: 84.6970% ( 30) 00:09:51.263 10636.997 - 10687.409: 84.8819% ( 31) 00:09:51.263 10687.409 - 10737.822: 85.0787% ( 33) 00:09:51.263 10737.822 - 10788.234: 85.2457% ( 28) 00:09:51.263 10788.234 - 10838.646: 85.4246% ( 30) 00:09:51.263 10838.646 - 10889.058: 85.5737% ( 25) 00:09:51.263 10889.058 - 10939.471: 85.7347% ( 27) 00:09:51.263 10939.471 - 10989.883: 85.8838% ( 25) 00:09:51.263 10989.883 - 11040.295: 86.0747% ( 32) 00:09:51.263 11040.295 - 11090.708: 86.2476% ( 29) 00:09:51.263 11090.708 - 11141.120: 86.3967% ( 25) 00:09:51.263 11141.120 - 11191.532: 86.5458% ( 25) 00:09:51.263 11191.532 - 11241.945: 86.7188% ( 29) 00:09:51.263 11241.945 - 11292.357: 86.8917% ( 29) 00:09:51.263 11292.357 - 11342.769: 87.1183% ( 
38) 00:09:51.263 11342.769 - 11393.182: 87.3449% ( 38) 00:09:51.263 11393.182 - 11443.594: 87.5537% ( 35) 00:09:51.263 11443.594 - 11494.006: 87.7684% ( 36) 00:09:51.263 11494.006 - 11544.418: 87.9413% ( 29) 00:09:51.263 11544.418 - 11594.831: 88.1143% ( 29) 00:09:51.263 11594.831 - 11645.243: 88.2812% ( 28) 00:09:51.263 11645.243 - 11695.655: 88.4482% ( 28) 00:09:51.263 11695.655 - 11746.068: 88.6093% ( 27) 00:09:51.263 11746.068 - 11796.480: 88.7643% ( 26) 00:09:51.263 11796.480 - 11846.892: 88.9074% ( 24) 00:09:51.263 11846.892 - 11897.305: 89.0446% ( 23) 00:09:51.263 11897.305 - 11947.717: 89.1639% ( 20) 00:09:51.263 11947.717 - 11998.129: 89.2712% ( 18) 00:09:51.263 11998.129 - 12048.542: 89.3845% ( 19) 00:09:51.263 12048.542 - 12098.954: 89.4919% ( 18) 00:09:51.263 12098.954 - 12149.366: 89.6589% ( 28) 00:09:51.263 12149.366 - 12199.778: 89.8378% ( 30) 00:09:51.263 12199.778 - 12250.191: 90.0286% ( 32) 00:09:51.263 12250.191 - 12300.603: 90.2314% ( 34) 00:09:51.263 12300.603 - 12351.015: 90.4222% ( 32) 00:09:51.263 12351.015 - 12401.428: 90.6190% ( 33) 00:09:51.263 12401.428 - 12451.840: 90.8099% ( 32) 00:09:51.263 12451.840 - 12502.252: 91.0126% ( 34) 00:09:51.263 12502.252 - 12552.665: 91.2094% ( 33) 00:09:51.263 12552.665 - 12603.077: 91.4062% ( 33) 00:09:51.263 12603.077 - 12653.489: 91.6031% ( 33) 00:09:51.263 12653.489 - 12703.902: 91.7999% ( 33) 00:09:51.263 12703.902 - 12754.314: 92.0384% ( 40) 00:09:51.263 12754.314 - 12804.726: 92.2412% ( 34) 00:09:51.263 12804.726 - 12855.138: 92.4320% ( 32) 00:09:51.263 12855.138 - 12905.551: 92.6348% ( 34) 00:09:51.263 12905.551 - 13006.375: 93.0224% ( 65) 00:09:51.263 13006.375 - 13107.200: 93.3981% ( 63) 00:09:51.263 13107.200 - 13208.025: 93.7321% ( 56) 00:09:51.263 13208.025 - 13308.849: 94.0780% ( 58) 00:09:51.263 13308.849 - 13409.674: 94.3822% ( 51) 00:09:51.263 13409.674 - 13510.498: 94.6684% ( 48) 00:09:51.263 13510.498 - 13611.323: 94.9785% ( 52) 00:09:51.263 13611.323 - 13712.148: 95.2886% ( 52) 00:09:51.263 13712.148 - 13812.972: 95.6167% ( 55) 00:09:51.263 13812.972 - 13913.797: 95.9387% ( 54) 00:09:51.263 13913.797 - 14014.622: 96.2309% ( 49) 00:09:51.263 14014.622 - 14115.446: 96.5291% ( 50) 00:09:51.263 14115.446 - 14216.271: 96.7557% ( 38) 00:09:51.263 14216.271 - 14317.095: 96.9406% ( 31) 00:09:51.263 14317.095 - 14417.920: 97.1076% ( 28) 00:09:51.263 14417.920 - 14518.745: 97.2865% ( 30) 00:09:51.263 14518.745 - 14619.569: 97.4714% ( 31) 00:09:51.263 14619.569 - 14720.394: 97.6264% ( 26) 00:09:51.263 14720.394 - 14821.218: 97.7636% ( 23) 00:09:51.263 14821.218 - 14922.043: 97.8769% ( 19) 00:09:51.263 14922.043 - 15022.868: 97.9783% ( 17) 00:09:51.263 15022.868 - 15123.692: 98.0558% ( 13) 00:09:51.263 15123.692 - 15224.517: 98.1274% ( 12) 00:09:51.263 15224.517 - 15325.342: 98.2109% ( 14) 00:09:51.263 15325.342 - 15426.166: 98.2824% ( 12) 00:09:51.263 15426.166 - 15526.991: 98.3480% ( 11) 00:09:51.263 15526.991 - 15627.815: 98.4136% ( 11) 00:09:51.263 15627.815 - 15728.640: 98.4673% ( 9) 00:09:51.263 15728.640 - 15829.465: 98.4733% ( 1) 00:09:51.263 17241.009 - 17341.834: 98.4792% ( 1) 00:09:51.263 17341.834 - 17442.658: 98.4971% ( 3) 00:09:51.263 17442.658 - 17543.483: 98.5210% ( 4) 00:09:51.263 17543.483 - 17644.308: 98.5448% ( 4) 00:09:51.263 17644.308 - 17745.132: 98.5687% ( 4) 00:09:51.263 17745.132 - 17845.957: 98.5926% ( 4) 00:09:51.263 17845.957 - 17946.782: 98.6164% ( 4) 00:09:51.263 17946.782 - 18047.606: 98.6522% ( 6) 00:09:51.263 18047.606 - 18148.431: 98.6880% ( 6) 00:09:51.263 18148.431 - 18249.255: 
98.7297% ( 7) 00:09:51.263 18249.255 - 18350.080: 98.7595% ( 5) 00:09:51.263 18350.080 - 18450.905: 98.8013% ( 7) 00:09:51.263 18450.905 - 18551.729: 98.8371% ( 6) 00:09:51.263 18551.729 - 18652.554: 98.8669% ( 5) 00:09:51.263 18652.554 - 18753.378: 98.9027% ( 6) 00:09:51.263 18753.378 - 18854.203: 98.9385% ( 6) 00:09:51.263 18854.203 - 18955.028: 98.9683% ( 5) 00:09:51.263 18955.028 - 19055.852: 99.0100% ( 7) 00:09:51.263 19055.852 - 19156.677: 99.0458% ( 6) 00:09:51.263 19156.677 - 19257.502: 99.0816% ( 6) 00:09:51.263 19257.502 - 19358.326: 99.1233% ( 7) 00:09:51.263 19358.326 - 19459.151: 99.1591% ( 6) 00:09:51.263 19459.151 - 19559.975: 99.1949% ( 6) 00:09:51.263 19559.975 - 19660.800: 99.2307% ( 6) 00:09:51.263 19660.800 - 19761.625: 99.2366% ( 1) 00:09:51.263 30852.332 - 31053.982: 99.2784% ( 7) 00:09:51.263 31053.982 - 31255.631: 99.3261% ( 8) 00:09:51.263 31255.631 - 31457.280: 99.3738% ( 8) 00:09:51.263 31457.280 - 31658.929: 99.4215% ( 8) 00:09:51.263 31658.929 - 31860.578: 99.4752% ( 9) 00:09:51.263 31860.578 - 32062.228: 99.5229% ( 8) 00:09:51.263 32062.228 - 32263.877: 99.5706% ( 8) 00:09:51.263 32263.877 - 32465.526: 99.6183% ( 8) 00:09:51.263 32465.526 - 32667.175: 99.6720% ( 9) 00:09:51.263 32667.175 - 32868.825: 99.7197% ( 8) 00:09:51.263 32868.825 - 33070.474: 99.7734% ( 9) 00:09:51.263 33070.474 - 33272.123: 99.8211% ( 8) 00:09:51.263 33272.123 - 33473.772: 99.8688% ( 8) 00:09:51.263 33473.772 - 33675.422: 99.9225% ( 9) 00:09:51.263 33675.422 - 33877.071: 99.9702% ( 8) 00:09:51.263 33877.071 - 34078.720: 100.0000% ( 5) 00:09:51.263 00:09:51.263 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:51.263 ============================================================================== 00:09:51.263 Range in us Cumulative IO count 00:09:51.263 4965.612 - 4990.818: 0.0119% ( 2) 00:09:51.263 4990.818 - 5016.025: 0.0477% ( 6) 00:09:51.263 5016.025 - 5041.231: 0.0895% ( 7) 00:09:51.263 5041.231 - 5066.437: 0.1491% ( 10) 00:09:51.263 5066.437 - 5091.643: 0.2624% ( 19) 00:09:51.263 5091.643 - 5116.849: 0.4294% ( 28) 00:09:51.263 5116.849 - 5142.055: 0.5785% ( 25) 00:09:51.263 5142.055 - 5167.262: 0.8469% ( 45) 00:09:51.263 5167.262 - 5192.468: 1.2226% ( 63) 00:09:51.263 5192.468 - 5217.674: 1.5864% ( 61) 00:09:51.263 5217.674 - 5242.880: 1.9859% ( 67) 00:09:51.263 5242.880 - 5268.086: 2.5763% ( 99) 00:09:51.263 5268.086 - 5293.292: 3.2204% ( 108) 00:09:51.263 5293.292 - 5318.498: 3.9838% ( 128) 00:09:51.263 5318.498 - 5343.705: 4.7114% ( 122) 00:09:51.263 5343.705 - 5368.911: 5.5821% ( 146) 00:09:51.263 5368.911 - 5394.117: 6.5363% ( 160) 00:09:51.263 5394.117 - 5419.323: 7.4308% ( 150) 00:09:51.263 5419.323 - 5444.529: 8.4268% ( 167) 00:09:51.263 5444.529 - 5469.735: 9.4346% ( 169) 00:09:51.263 5469.735 - 5494.942: 10.3590% ( 155) 00:09:51.263 5494.942 - 5520.148: 11.3311% ( 163) 00:09:51.263 5520.148 - 5545.354: 12.3807% ( 176) 00:09:51.263 5545.354 - 5570.560: 13.4005% ( 171) 00:09:51.263 5570.560 - 5595.766: 14.4084% ( 169) 00:09:51.263 5595.766 - 5620.972: 15.4699% ( 178) 00:09:51.263 5620.972 - 5646.178: 16.5852% ( 187) 00:09:51.263 5646.178 - 5671.385: 17.6825% ( 184) 00:09:51.264 5671.385 - 5696.591: 18.7381% ( 177) 00:09:51.264 5696.591 - 5721.797: 19.8115% ( 180) 00:09:51.264 5721.797 - 5747.003: 20.8791% ( 179) 00:09:51.264 5747.003 - 5772.209: 21.9704% ( 183) 00:09:51.264 5772.209 - 5797.415: 23.0200% ( 176) 00:09:51.264 5797.415 - 5822.622: 24.1531% ( 190) 00:09:51.264 5822.622 - 5847.828: 25.2385% ( 182) 00:09:51.264 5847.828 - 5873.034: 26.3001% ( 178) 
00:09:51.264 5873.034 - 5898.240: 27.4511% ( 193) 00:09:51.264 5898.240 - 5923.446: 28.5484% ( 184) 00:09:51.264 5923.446 - 5948.652: 29.6100% ( 178) 00:09:51.264 5948.652 - 5973.858: 30.7431% ( 190) 00:09:51.264 5973.858 - 5999.065: 31.8106% ( 179) 00:09:51.264 5999.065 - 6024.271: 32.9437% ( 190) 00:09:51.264 6024.271 - 6049.477: 34.0291% ( 182) 00:09:51.264 6049.477 - 6074.683: 35.1443% ( 187) 00:09:51.264 6074.683 - 6099.889: 36.2715% ( 189) 00:09:51.264 6099.889 - 6125.095: 37.3927% ( 188) 00:09:51.264 6125.095 - 6150.302: 38.5079% ( 187) 00:09:51.264 6150.302 - 6175.508: 39.6291% ( 188) 00:09:51.264 6175.508 - 6200.714: 40.7443% ( 187) 00:09:51.264 6200.714 - 6225.920: 41.9012% ( 194) 00:09:51.264 6225.920 - 6251.126: 43.0344% ( 190) 00:09:51.264 6251.126 - 6276.332: 44.1675% ( 190) 00:09:51.264 6276.332 - 6301.538: 45.3185% ( 193) 00:09:51.264 6301.538 - 6326.745: 46.4575% ( 191) 00:09:51.264 6326.745 - 6351.951: 47.6085% ( 193) 00:09:51.264 6351.951 - 6377.157: 48.8132% ( 202) 00:09:51.264 6377.157 - 6402.363: 49.9225% ( 186) 00:09:51.264 6402.363 - 6427.569: 51.0437% ( 188) 00:09:51.264 6427.569 - 6452.775: 52.2006% ( 194) 00:09:51.264 6452.775 - 6503.188: 54.4907% ( 384) 00:09:51.264 6503.188 - 6553.600: 56.7211% ( 374) 00:09:51.264 6553.600 - 6604.012: 58.8263% ( 353) 00:09:51.264 6604.012 - 6654.425: 60.9017% ( 348) 00:09:51.264 6654.425 - 6704.837: 62.7922% ( 317) 00:09:51.264 6704.837 - 6755.249: 64.5336% ( 292) 00:09:51.264 6755.249 - 6805.662: 66.1319% ( 268) 00:09:51.264 6805.662 - 6856.074: 67.5573% ( 239) 00:09:51.264 6856.074 - 6906.486: 68.7202% ( 195) 00:09:51.264 6906.486 - 6956.898: 69.6147% ( 150) 00:09:51.264 6956.898 - 7007.311: 70.3483% ( 123) 00:09:51.264 7007.311 - 7057.723: 70.9745% ( 105) 00:09:51.264 7057.723 - 7108.135: 71.4814% ( 85) 00:09:51.264 7108.135 - 7158.548: 71.9406% ( 77) 00:09:51.264 7158.548 - 7208.960: 72.3342% ( 66) 00:09:51.264 7208.960 - 7259.372: 72.7278% ( 66) 00:09:51.264 7259.372 - 7309.785: 73.0797% ( 59) 00:09:51.264 7309.785 - 7360.197: 73.3838% ( 51) 00:09:51.264 7360.197 - 7410.609: 73.6999% ( 53) 00:09:51.264 7410.609 - 7461.022: 73.9862% ( 48) 00:09:51.264 7461.022 - 7511.434: 74.3022% ( 53) 00:09:51.264 7511.434 - 7561.846: 74.5766% ( 46) 00:09:51.264 7561.846 - 7612.258: 74.8092% ( 39) 00:09:51.264 7612.258 - 7662.671: 75.0477% ( 40) 00:09:51.264 7662.671 - 7713.083: 75.2505% ( 34) 00:09:51.264 7713.083 - 7763.495: 75.4413% ( 32) 00:09:51.264 7763.495 - 7813.908: 75.6500% ( 35) 00:09:51.264 7813.908 - 7864.320: 75.8588% ( 35) 00:09:51.264 7864.320 - 7914.732: 76.0556% ( 33) 00:09:51.264 7914.732 - 7965.145: 76.2643% ( 35) 00:09:51.264 7965.145 - 8015.557: 76.4909% ( 38) 00:09:51.264 8015.557 - 8065.969: 76.7116% ( 37) 00:09:51.264 8065.969 - 8116.382: 76.9442% ( 39) 00:09:51.264 8116.382 - 8166.794: 77.1529% ( 35) 00:09:51.264 8166.794 - 8217.206: 77.3318% ( 30) 00:09:51.264 8217.206 - 8267.618: 77.4988% ( 28) 00:09:51.264 8267.618 - 8318.031: 77.6777% ( 30) 00:09:51.264 8318.031 - 8368.443: 77.8626% ( 31) 00:09:51.264 8368.443 - 8418.855: 78.0415% ( 30) 00:09:51.264 8418.855 - 8469.268: 78.2264% ( 31) 00:09:51.264 8469.268 - 8519.680: 78.4292% ( 34) 00:09:51.264 8519.680 - 8570.092: 78.6140% ( 31) 00:09:51.264 8570.092 - 8620.505: 78.8287% ( 36) 00:09:51.264 8620.505 - 8670.917: 79.0076% ( 30) 00:09:51.264 8670.917 - 8721.329: 79.1627% ( 26) 00:09:51.264 8721.329 - 8771.742: 79.3058% ( 24) 00:09:51.264 8771.742 - 8822.154: 79.4370% ( 22) 00:09:51.264 8822.154 - 8872.566: 79.5742% ( 23) 00:09:51.264 8872.566 - 8922.978: 
79.7173% ( 24) 00:09:51.264 8922.978 - 8973.391: 79.8604% ( 24) 00:09:51.264 8973.391 - 9023.803: 79.9797% ( 20) 00:09:51.264 9023.803 - 9074.215: 80.1050% ( 21) 00:09:51.264 9074.215 - 9124.628: 80.2004% ( 16) 00:09:51.264 9124.628 - 9175.040: 80.3256% ( 21) 00:09:51.264 9175.040 - 9225.452: 80.4628% ( 23) 00:09:51.264 9225.452 - 9275.865: 80.5940% ( 22) 00:09:51.264 9275.865 - 9326.277: 80.7371% ( 24) 00:09:51.264 9326.277 - 9376.689: 80.8564% ( 20) 00:09:51.264 9376.689 - 9427.102: 80.9637% ( 18) 00:09:51.264 9427.102 - 9477.514: 81.0413% ( 13) 00:09:51.264 9477.514 - 9527.926: 81.1248% ( 14) 00:09:51.264 9527.926 - 9578.338: 81.2261% ( 17) 00:09:51.264 9578.338 - 9628.751: 81.3454% ( 20) 00:09:51.264 9628.751 - 9679.163: 81.4826% ( 23) 00:09:51.264 9679.163 - 9729.575: 81.6019% ( 20) 00:09:51.264 9729.575 - 9779.988: 81.7152% ( 19) 00:09:51.264 9779.988 - 9830.400: 81.8583% ( 24) 00:09:51.264 9830.400 - 9880.812: 81.9835% ( 21) 00:09:51.264 9880.812 - 9931.225: 82.1505% ( 28) 00:09:51.264 9931.225 - 9981.637: 82.3115% ( 27) 00:09:51.264 9981.637 - 10032.049: 82.4606% ( 25) 00:09:51.264 10032.049 - 10082.462: 82.6336% ( 29) 00:09:51.264 10082.462 - 10132.874: 82.7827% ( 25) 00:09:51.264 10132.874 - 10183.286: 82.9437% ( 27) 00:09:51.264 10183.286 - 10233.698: 83.0928% ( 25) 00:09:51.264 10233.698 - 10284.111: 83.2538% ( 27) 00:09:51.264 10284.111 - 10334.523: 83.4148% ( 27) 00:09:51.264 10334.523 - 10384.935: 83.5938% ( 30) 00:09:51.264 10384.935 - 10435.348: 83.7488% ( 26) 00:09:51.264 10435.348 - 10485.760: 83.9158% ( 28) 00:09:51.264 10485.760 - 10536.172: 84.0828% ( 28) 00:09:51.264 10536.172 - 10586.585: 84.2617% ( 30) 00:09:51.264 10586.585 - 10636.997: 84.4466% ( 31) 00:09:51.264 10636.997 - 10687.409: 84.6255% ( 30) 00:09:51.264 10687.409 - 10737.822: 84.8104% ( 31) 00:09:51.264 10737.822 - 10788.234: 84.9893% ( 30) 00:09:51.264 10788.234 - 10838.646: 85.2159% ( 38) 00:09:51.264 10838.646 - 10889.058: 85.4664% ( 42) 00:09:51.264 10889.058 - 10939.471: 85.6811% ( 36) 00:09:51.264 10939.471 - 10989.883: 85.8779% ( 33) 00:09:51.264 10989.883 - 11040.295: 86.0926% ( 36) 00:09:51.264 11040.295 - 11090.708: 86.3192% ( 38) 00:09:51.264 11090.708 - 11141.120: 86.5577% ( 40) 00:09:51.264 11141.120 - 11191.532: 86.7844% ( 38) 00:09:51.264 11191.532 - 11241.945: 87.0110% ( 38) 00:09:51.264 11241.945 - 11292.357: 87.2674% ( 43) 00:09:51.264 11292.357 - 11342.769: 87.5119% ( 41) 00:09:51.264 11342.769 - 11393.182: 87.7564% ( 41) 00:09:51.264 11393.182 - 11443.594: 88.0069% ( 42) 00:09:51.264 11443.594 - 11494.006: 88.2335% ( 38) 00:09:51.264 11494.006 - 11544.418: 88.4423% ( 35) 00:09:51.264 11544.418 - 11594.831: 88.6689% ( 38) 00:09:51.264 11594.831 - 11645.243: 88.8597% ( 32) 00:09:51.264 11645.243 - 11695.655: 89.0386% ( 30) 00:09:51.264 11695.655 - 11746.068: 89.2295% ( 32) 00:09:51.264 11746.068 - 11796.480: 89.4323% ( 34) 00:09:51.264 11796.480 - 11846.892: 89.6171% ( 31) 00:09:51.264 11846.892 - 11897.305: 89.7841% ( 28) 00:09:51.264 11897.305 - 11947.717: 89.9451% ( 27) 00:09:51.264 11947.717 - 11998.129: 90.1360% ( 32) 00:09:51.264 11998.129 - 12048.542: 90.3268% ( 32) 00:09:51.264 12048.542 - 12098.954: 90.5117% ( 31) 00:09:51.264 12098.954 - 12149.366: 90.7025% ( 32) 00:09:51.264 12149.366 - 12199.778: 90.9113% ( 35) 00:09:51.264 12199.778 - 12250.191: 91.0902% ( 30) 00:09:51.264 12250.191 - 12300.603: 91.2810% ( 32) 00:09:51.264 12300.603 - 12351.015: 91.4659% ( 31) 00:09:51.264 12351.015 - 12401.428: 91.6388% ( 29) 00:09:51.264 12401.428 - 12451.840: 91.8177% ( 30) 00:09:51.264 
12451.840 - 12502.252: 91.9907% ( 29) 00:09:51.264 12502.252 - 12552.665: 92.1756% ( 31) 00:09:51.264 12552.665 - 12603.077: 92.3426% ( 28) 00:09:51.264 12603.077 - 12653.489: 92.5215% ( 30) 00:09:51.264 12653.489 - 12703.902: 92.7004% ( 30) 00:09:51.264 12703.902 - 12754.314: 92.8316% ( 22) 00:09:51.264 12754.314 - 12804.726: 92.9389% ( 18) 00:09:51.264 12804.726 - 12855.138: 93.0522% ( 19) 00:09:51.264 12855.138 - 12905.551: 93.1477% ( 16) 00:09:51.264 12905.551 - 13006.375: 93.3683% ( 37) 00:09:51.264 13006.375 - 13107.200: 93.5890% ( 37) 00:09:51.264 13107.200 - 13208.025: 93.8633% ( 46) 00:09:51.264 13208.025 - 13308.849: 94.1257% ( 44) 00:09:51.264 13308.849 - 13409.674: 94.3583% ( 39) 00:09:51.264 13409.674 - 13510.498: 94.5909% ( 39) 00:09:51.264 13510.498 - 13611.323: 94.7996% ( 35) 00:09:51.264 13611.323 - 13712.148: 94.9666% ( 28) 00:09:51.264 13712.148 - 13812.972: 95.1276% ( 27) 00:09:51.264 13812.972 - 13913.797: 95.3185% ( 32) 00:09:51.264 13913.797 - 14014.622: 95.5868% ( 45) 00:09:51.264 14014.622 - 14115.446: 95.7717% ( 31) 00:09:51.264 14115.446 - 14216.271: 95.9506% ( 30) 00:09:51.264 14216.271 - 14317.095: 96.1474% ( 33) 00:09:51.264 14317.095 - 14417.920: 96.3621% ( 36) 00:09:51.264 14417.920 - 14518.745: 96.5708% ( 35) 00:09:51.264 14518.745 - 14619.569: 96.7677% ( 33) 00:09:51.264 14619.569 - 14720.394: 96.9585% ( 32) 00:09:51.264 14720.394 - 14821.218: 97.1493% ( 32) 00:09:51.264 14821.218 - 14922.043: 97.3342% ( 31) 00:09:51.264 14922.043 - 15022.868: 97.4773% ( 24) 00:09:51.264 15022.868 - 15123.692: 97.5847% ( 18) 00:09:51.264 15123.692 - 15224.517: 97.7099% ( 21) 00:09:51.264 15224.517 - 15325.342: 97.8292% ( 20) 00:09:51.264 15325.342 - 15426.166: 97.9544% ( 21) 00:09:51.264 15426.166 - 15526.991: 98.0618% ( 18) 00:09:51.264 15526.991 - 15627.815: 98.1811% ( 20) 00:09:51.264 15627.815 - 15728.640: 98.2944% ( 19) 00:09:51.264 15728.640 - 15829.465: 98.4077% ( 19) 00:09:51.264 15829.465 - 15930.289: 98.4733% ( 11) 00:09:51.264 16837.711 - 16938.535: 98.4792% ( 1) 00:09:51.264 16938.535 - 17039.360: 98.5210% ( 7) 00:09:51.265 17039.360 - 17140.185: 98.5568% ( 6) 00:09:51.265 17140.185 - 17241.009: 98.5926% ( 6) 00:09:51.265 17241.009 - 17341.834: 98.6283% ( 6) 00:09:51.265 17341.834 - 17442.658: 98.6701% ( 7) 00:09:51.265 17442.658 - 17543.483: 98.7059% ( 6) 00:09:51.265 17543.483 - 17644.308: 98.7417% ( 6) 00:09:51.265 17644.308 - 17745.132: 98.7834% ( 7) 00:09:51.265 17745.132 - 17845.957: 98.8192% ( 6) 00:09:51.265 17845.957 - 17946.782: 98.8550% ( 6) 00:09:51.265 17946.782 - 18047.606: 98.8907% ( 6) 00:09:51.265 18047.606 - 18148.431: 98.9265% ( 6) 00:09:51.265 18148.431 - 18249.255: 98.9623% ( 6) 00:09:51.265 18249.255 - 18350.080: 98.9921% ( 5) 00:09:51.265 18350.080 - 18450.905: 99.0279% ( 6) 00:09:51.265 18450.905 - 18551.729: 99.0637% ( 6) 00:09:51.265 18551.729 - 18652.554: 99.0935% ( 5) 00:09:51.265 18652.554 - 18753.378: 99.1293% ( 6) 00:09:51.265 18753.378 - 18854.203: 99.1651% ( 6) 00:09:51.265 18854.203 - 18955.028: 99.1949% ( 5) 00:09:51.265 18955.028 - 19055.852: 99.2307% ( 6) 00:09:51.265 19055.852 - 19156.677: 99.2366% ( 1) 00:09:51.265 30852.332 - 31053.982: 99.2545% ( 3) 00:09:51.265 31053.982 - 31255.631: 99.2844% ( 5) 00:09:51.265 31255.631 - 31457.280: 99.3142% ( 5) 00:09:51.265 31457.280 - 31658.929: 99.3500% ( 6) 00:09:51.265 31658.929 - 31860.578: 99.3857% ( 6) 00:09:51.265 31860.578 - 32062.228: 99.4394% ( 9) 00:09:51.265 32062.228 - 32263.877: 99.4871% ( 8) 00:09:51.265 32263.877 - 32465.526: 99.5408% ( 9) 00:09:51.265 32465.526 - 
32667.175: 99.5825% ( 7) 00:09:51.265 32667.175 - 32868.825: 99.6243% ( 7) 00:09:51.265 32868.825 - 33070.474: 99.6720% ( 8) 00:09:51.265 33070.474 - 33272.123: 99.7137% ( 7) 00:09:51.265 33272.123 - 33473.772: 99.7674% ( 9) 00:09:51.265 33473.772 - 33675.422: 99.8151% ( 8) 00:09:51.265 33675.422 - 33877.071: 99.8628% ( 8) 00:09:51.265 33877.071 - 34078.720: 99.9105% ( 8) 00:09:51.265 34078.720 - 34280.369: 99.9583% ( 8) 00:09:51.265 34280.369 - 34482.018: 100.0000% ( 7) 00:09:51.265 00:09:51.265 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:51.265 ============================================================================== 00:09:51.265 Range in us Cumulative IO count 00:09:51.265 4990.818 - 5016.025: 0.0473% ( 8) 00:09:51.265 5016.025 - 5041.231: 0.0769% ( 5) 00:09:51.265 5041.231 - 5066.437: 0.1361% ( 10) 00:09:51.265 5066.437 - 5091.643: 0.2367% ( 17) 00:09:51.265 5091.643 - 5116.849: 0.3492% ( 19) 00:09:51.265 5116.849 - 5142.055: 0.5268% ( 30) 00:09:51.265 5142.055 - 5167.262: 0.6925% ( 28) 00:09:51.265 5167.262 - 5192.468: 0.9706% ( 47) 00:09:51.265 5192.468 - 5217.674: 1.3909% ( 71) 00:09:51.265 5217.674 - 5242.880: 1.8880% ( 84) 00:09:51.265 5242.880 - 5268.086: 2.4266% ( 91) 00:09:51.265 5268.086 - 5293.292: 3.0954% ( 113) 00:09:51.265 5293.292 - 5318.498: 3.8293% ( 124) 00:09:51.265 5318.498 - 5343.705: 4.5395% ( 120) 00:09:51.265 5343.705 - 5368.911: 5.3326% ( 134) 00:09:51.265 5368.911 - 5394.117: 6.2500% ( 155) 00:09:51.265 5394.117 - 5419.323: 7.2206% ( 164) 00:09:51.265 5419.323 - 5444.529: 8.2860% ( 180) 00:09:51.265 5444.529 - 5469.735: 9.2330% ( 160) 00:09:51.265 5469.735 - 5494.942: 10.2332% ( 169) 00:09:51.265 5494.942 - 5520.148: 11.2334% ( 169) 00:09:51.265 5520.148 - 5545.354: 12.2573% ( 173) 00:09:51.265 5545.354 - 5570.560: 13.2517% ( 168) 00:09:51.265 5570.560 - 5595.766: 14.3466% ( 185) 00:09:51.265 5595.766 - 5620.972: 15.4356% ( 184) 00:09:51.265 5620.972 - 5646.178: 16.4595% ( 173) 00:09:51.265 5646.178 - 5671.385: 17.4893% ( 174) 00:09:51.265 5671.385 - 5696.591: 18.5488% ( 179) 00:09:51.265 5696.591 - 5721.797: 19.6615% ( 188) 00:09:51.265 5721.797 - 5747.003: 20.7150% ( 178) 00:09:51.265 5747.003 - 5772.209: 21.8336% ( 189) 00:09:51.265 5772.209 - 5797.415: 22.9640% ( 191) 00:09:51.265 5797.415 - 5822.622: 24.0353% ( 181) 00:09:51.265 5822.622 - 5847.828: 25.1184% ( 183) 00:09:51.265 5847.828 - 5873.034: 26.2074% ( 184) 00:09:51.265 5873.034 - 5898.240: 27.3023% ( 185) 00:09:51.265 5898.240 - 5923.446: 28.4032% ( 186) 00:09:51.265 5923.446 - 5948.652: 29.5632% ( 196) 00:09:51.265 5948.652 - 5973.858: 30.6937% ( 191) 00:09:51.265 5973.858 - 5999.065: 31.8300% ( 192) 00:09:51.265 5999.065 - 6024.271: 32.9723% ( 193) 00:09:51.265 6024.271 - 6049.477: 34.1383% ( 197) 00:09:51.265 6049.477 - 6074.683: 35.2569% ( 189) 00:09:51.265 6074.683 - 6099.889: 36.4051% ( 194) 00:09:51.265 6099.889 - 6125.095: 37.5355% ( 191) 00:09:51.265 6125.095 - 6150.302: 38.7015% ( 197) 00:09:51.265 6150.302 - 6175.508: 39.8733% ( 198) 00:09:51.265 6175.508 - 6200.714: 40.9860% ( 188) 00:09:51.265 6200.714 - 6225.920: 42.1106% ( 190) 00:09:51.265 6225.920 - 6251.126: 43.2706% ( 196) 00:09:51.265 6251.126 - 6276.332: 44.4188% ( 194) 00:09:51.265 6276.332 - 6301.538: 45.5196% ( 186) 00:09:51.265 6301.538 - 6326.745: 46.6915% ( 198) 00:09:51.265 6326.745 - 6351.951: 47.8812% ( 201) 00:09:51.265 6351.951 - 6377.157: 49.0589% ( 199) 00:09:51.265 6377.157 - 6402.363: 50.2131% ( 195) 00:09:51.265 6402.363 - 6427.569: 51.3435% ( 191) 00:09:51.265 6427.569 - 6452.775: 
52.4503% ( 187) 00:09:51.265 6452.775 - 6503.188: 54.7704% ( 392) 00:09:51.265 6503.188 - 6553.600: 57.0135% ( 379) 00:09:51.265 6553.600 - 6604.012: 59.2034% ( 370) 00:09:51.265 6604.012 - 6654.425: 61.2630% ( 348) 00:09:51.265 6654.425 - 6704.837: 63.2102% ( 329) 00:09:51.265 6704.837 - 6755.249: 64.8911% ( 284) 00:09:51.265 6755.249 - 6805.662: 66.3944% ( 254) 00:09:51.265 6805.662 - 6856.074: 67.7083% ( 222) 00:09:51.265 6856.074 - 6906.486: 68.8269% ( 189) 00:09:51.265 6906.486 - 6956.898: 69.6911% ( 146) 00:09:51.265 6956.898 - 7007.311: 70.2829% ( 100) 00:09:51.265 7007.311 - 7057.723: 70.7919% ( 86) 00:09:51.265 7057.723 - 7108.135: 71.2062% ( 70) 00:09:51.265 7108.135 - 7158.548: 71.6027% ( 67) 00:09:51.265 7158.548 - 7208.960: 71.9460% ( 58) 00:09:51.265 7208.960 - 7259.372: 72.2775% ( 56) 00:09:51.265 7259.372 - 7309.785: 72.6562% ( 64) 00:09:51.265 7309.785 - 7360.197: 73.0410% ( 65) 00:09:51.265 7360.197 - 7410.609: 73.3902% ( 59) 00:09:51.265 7410.609 - 7461.022: 73.6802% ( 49) 00:09:51.265 7461.022 - 7511.434: 73.9347% ( 43) 00:09:51.265 7511.434 - 7561.846: 74.1892% ( 43) 00:09:51.265 7561.846 - 7612.258: 74.4614% ( 46) 00:09:51.265 7612.258 - 7662.671: 74.6922% ( 39) 00:09:51.265 7662.671 - 7713.083: 74.9349% ( 41) 00:09:51.265 7713.083 - 7763.495: 75.1657% ( 39) 00:09:51.265 7763.495 - 7813.908: 75.4084% ( 41) 00:09:51.265 7813.908 - 7864.320: 75.6333% ( 38) 00:09:51.265 7864.320 - 7914.732: 75.8582% ( 38) 00:09:51.265 7914.732 - 7965.145: 76.0831% ( 38) 00:09:51.265 7965.145 - 8015.557: 76.3021% ( 37) 00:09:51.265 8015.557 - 8065.969: 76.5152% ( 36) 00:09:51.265 8065.969 - 8116.382: 76.7045% ( 32) 00:09:51.265 8116.382 - 8166.794: 76.8643% ( 27) 00:09:51.265 8166.794 - 8217.206: 77.0419% ( 30) 00:09:51.265 8217.206 - 8267.618: 77.2135% ( 29) 00:09:51.265 8267.618 - 8318.031: 77.3852% ( 29) 00:09:51.265 8318.031 - 8368.443: 77.5509% ( 28) 00:09:51.265 8368.443 - 8418.855: 77.7048% ( 26) 00:09:51.265 8418.855 - 8469.268: 77.8587% ( 26) 00:09:51.265 8469.268 - 8519.680: 78.0185% ( 27) 00:09:51.265 8519.680 - 8570.092: 78.1783% ( 27) 00:09:51.265 8570.092 - 8620.505: 78.3321% ( 26) 00:09:51.265 8620.505 - 8670.917: 78.4920% ( 27) 00:09:51.265 8670.917 - 8721.329: 78.6399% ( 25) 00:09:51.265 8721.329 - 8771.742: 78.7938% ( 26) 00:09:51.265 8771.742 - 8822.154: 78.9181% ( 21) 00:09:51.265 8822.154 - 8872.566: 79.0601% ( 24) 00:09:51.265 8872.566 - 8922.978: 79.2022% ( 24) 00:09:51.265 8922.978 - 8973.391: 79.3383% ( 23) 00:09:51.265 8973.391 - 9023.803: 79.4804% ( 24) 00:09:51.265 9023.803 - 9074.215: 79.5869% ( 18) 00:09:51.265 9074.215 - 9124.628: 79.6934% ( 18) 00:09:51.265 9124.628 - 9175.040: 79.8118% ( 20) 00:09:51.265 9175.040 - 9225.452: 79.9065% ( 16) 00:09:51.265 9225.452 - 9275.865: 80.0012% ( 16) 00:09:51.265 9275.865 - 9326.277: 80.0900% ( 15) 00:09:51.265 9326.277 - 9376.689: 80.1787% ( 15) 00:09:51.265 9376.689 - 9427.102: 80.2734% ( 16) 00:09:51.265 9427.102 - 9477.514: 80.3563% ( 14) 00:09:51.265 9477.514 - 9527.926: 80.4451% ( 15) 00:09:51.265 9527.926 - 9578.338: 80.5339% ( 15) 00:09:51.265 9578.338 - 9628.751: 80.6581% ( 21) 00:09:51.265 9628.751 - 9679.163: 80.7765% ( 20) 00:09:51.265 9679.163 - 9729.575: 80.8890% ( 19) 00:09:51.265 9729.575 - 9779.988: 80.9896% ( 17) 00:09:51.265 9779.988 - 9830.400: 81.0961% ( 18) 00:09:51.265 9830.400 - 9880.812: 81.2027% ( 18) 00:09:51.265 9880.812 - 9931.225: 81.3506% ( 25) 00:09:51.265 9931.225 - 9981.637: 81.4986% ( 25) 00:09:51.265 9981.637 - 10032.049: 81.6406% ( 24) 00:09:51.265 10032.049 - 10082.462: 81.7590% ( 
20) 00:09:51.265 10082.462 - 10132.874: 81.9129% ( 26) 00:09:51.265 10132.874 - 10183.286: 82.0608% ( 25) 00:09:51.265 10183.286 - 10233.698: 82.2029% ( 24) 00:09:51.265 10233.698 - 10284.111: 82.3449% ( 24) 00:09:51.265 10284.111 - 10334.523: 82.5225% ( 30) 00:09:51.265 10334.523 - 10384.935: 82.6882% ( 28) 00:09:51.265 10384.935 - 10435.348: 82.8598% ( 29) 00:09:51.265 10435.348 - 10485.760: 83.0374% ( 30) 00:09:51.265 10485.760 - 10536.172: 83.2446% ( 35) 00:09:51.265 10536.172 - 10586.585: 83.4162% ( 29) 00:09:51.265 10586.585 - 10636.997: 83.6233% ( 35) 00:09:51.265 10636.997 - 10687.409: 83.8482% ( 38) 00:09:51.265 10687.409 - 10737.822: 84.1027% ( 43) 00:09:51.265 10737.822 - 10788.234: 84.3217% ( 37) 00:09:51.265 10788.234 - 10838.646: 84.5289% ( 35) 00:09:51.265 10838.646 - 10889.058: 84.7242% ( 33) 00:09:51.265 10889.058 - 10939.471: 84.9077% ( 31) 00:09:51.265 10939.471 - 10989.883: 85.0852% ( 30) 00:09:51.266 10989.883 - 11040.295: 85.2687% ( 31) 00:09:51.266 11040.295 - 11090.708: 85.4522% ( 31) 00:09:51.266 11090.708 - 11141.120: 85.6534% ( 34) 00:09:51.266 11141.120 - 11191.532: 85.8665% ( 36) 00:09:51.266 11191.532 - 11241.945: 86.0736% ( 35) 00:09:51.266 11241.945 - 11292.357: 86.2808% ( 35) 00:09:51.266 11292.357 - 11342.769: 86.4879% ( 35) 00:09:51.266 11342.769 - 11393.182: 86.6951% ( 35) 00:09:51.266 11393.182 - 11443.594: 86.9377% ( 41) 00:09:51.266 11443.594 - 11494.006: 87.1745% ( 40) 00:09:51.266 11494.006 - 11544.418: 87.3935% ( 37) 00:09:51.266 11544.418 - 11594.831: 87.6361% ( 41) 00:09:51.266 11594.831 - 11645.243: 87.8729% ( 40) 00:09:51.266 11645.243 - 11695.655: 88.0800% ( 35) 00:09:51.266 11695.655 - 11746.068: 88.2990% ( 37) 00:09:51.266 11746.068 - 11796.480: 88.5002% ( 34) 00:09:51.266 11796.480 - 11846.892: 88.7133% ( 36) 00:09:51.266 11846.892 - 11897.305: 88.9264% ( 36) 00:09:51.266 11897.305 - 11947.717: 89.1513% ( 38) 00:09:51.266 11947.717 - 11998.129: 89.3762% ( 38) 00:09:51.266 11998.129 - 12048.542: 89.6129% ( 40) 00:09:51.266 12048.542 - 12098.954: 89.8082% ( 33) 00:09:51.266 12098.954 - 12149.366: 90.0095% ( 34) 00:09:51.266 12149.366 - 12199.778: 90.2817% ( 46) 00:09:51.266 12199.778 - 12250.191: 90.5362% ( 43) 00:09:51.266 12250.191 - 12300.603: 90.7789% ( 41) 00:09:51.266 12300.603 - 12351.015: 91.0215% ( 41) 00:09:51.266 12351.015 - 12401.428: 91.2169% ( 33) 00:09:51.266 12401.428 - 12451.840: 91.4181% ( 34) 00:09:51.266 12451.840 - 12502.252: 91.5897% ( 29) 00:09:51.266 12502.252 - 12552.665: 91.7554% ( 28) 00:09:51.266 12552.665 - 12603.077: 91.9152% ( 27) 00:09:51.266 12603.077 - 12653.489: 92.0810% ( 28) 00:09:51.266 12653.489 - 12703.902: 92.2763% ( 33) 00:09:51.266 12703.902 - 12754.314: 92.4598% ( 31) 00:09:51.266 12754.314 - 12804.726: 92.6255% ( 28) 00:09:51.266 12804.726 - 12855.138: 92.7971% ( 29) 00:09:51.266 12855.138 - 12905.551: 92.9569% ( 27) 00:09:51.266 12905.551 - 13006.375: 93.3061% ( 59) 00:09:51.266 13006.375 - 13107.200: 93.6494% ( 58) 00:09:51.266 13107.200 - 13208.025: 93.9394% ( 49) 00:09:51.266 13208.025 - 13308.849: 94.2116% ( 46) 00:09:51.266 13308.849 - 13409.674: 94.4780% ( 45) 00:09:51.266 13409.674 - 13510.498: 94.7443% ( 45) 00:09:51.266 13510.498 - 13611.323: 94.9751% ( 39) 00:09:51.266 13611.323 - 13712.148: 95.1645% ( 32) 00:09:51.266 13712.148 - 13812.972: 95.3184% ( 26) 00:09:51.266 13812.972 - 13913.797: 95.4723% ( 26) 00:09:51.266 13913.797 - 14014.622: 95.6084% ( 23) 00:09:51.266 14014.622 - 14115.446: 95.7386% ( 22) 00:09:51.266 14115.446 - 14216.271: 95.8748% ( 23) 00:09:51.266 14216.271 - 
14317.095: 96.0050% ( 22) 00:09:51.266 14317.095 - 14417.920: 96.1352% ( 22) 00:09:51.266 14417.920 - 14518.745: 96.2476% ( 19) 00:09:51.266 14518.745 - 14619.569: 96.3719% ( 21) 00:09:51.266 14619.569 - 14720.394: 96.4844% ( 19) 00:09:51.266 14720.394 - 14821.218: 96.6027% ( 20) 00:09:51.266 14821.218 - 14922.043: 96.7093% ( 18) 00:09:51.266 14922.043 - 15022.868: 96.7981% ( 15) 00:09:51.266 15022.868 - 15123.692: 96.8987% ( 17) 00:09:51.266 15123.692 - 15224.517: 96.9875% ( 15) 00:09:51.266 15224.517 - 15325.342: 97.0881% ( 17) 00:09:51.266 15325.342 - 15426.166: 97.2064% ( 20) 00:09:51.266 15426.166 - 15526.991: 97.3307% ( 21) 00:09:51.266 15526.991 - 15627.815: 97.4787% ( 25) 00:09:51.266 15627.815 - 15728.640: 97.6030% ( 21) 00:09:51.266 15728.640 - 15829.465: 97.7509% ( 25) 00:09:51.266 15829.465 - 15930.289: 97.8575% ( 18) 00:09:51.266 15930.289 - 16031.114: 97.9463% ( 15) 00:09:51.266 16031.114 - 16131.938: 98.0291% ( 14) 00:09:51.266 16131.938 - 16232.763: 98.1120% ( 14) 00:09:51.266 16232.763 - 16333.588: 98.2008% ( 15) 00:09:51.266 16333.588 - 16434.412: 98.2777% ( 13) 00:09:51.266 16434.412 - 16535.237: 98.3487% ( 12) 00:09:51.266 16535.237 - 16636.062: 98.4138% ( 11) 00:09:51.266 16636.062 - 16736.886: 98.4908% ( 13) 00:09:51.266 16736.886 - 16837.711: 98.5677% ( 13) 00:09:51.266 16837.711 - 16938.535: 98.6387% ( 12) 00:09:51.266 16938.535 - 17039.360: 98.7157% ( 13) 00:09:51.266 17039.360 - 17140.185: 98.7808% ( 11) 00:09:51.266 17140.185 - 17241.009: 98.8577% ( 13) 00:09:51.266 17241.009 - 17341.834: 98.9169% ( 10) 00:09:51.266 17341.834 - 17442.658: 98.9524% ( 6) 00:09:51.266 17442.658 - 17543.483: 98.9879% ( 6) 00:09:51.266 17543.483 - 17644.308: 99.0234% ( 6) 00:09:51.266 17644.308 - 17745.132: 99.0885% ( 11) 00:09:51.266 17745.132 - 17845.957: 99.1477% ( 10) 00:09:51.266 17845.957 - 17946.782: 99.2069% ( 10) 00:09:51.266 17946.782 - 18047.606: 99.2602% ( 9) 00:09:51.266 18047.606 - 18148.431: 99.3194% ( 10) 00:09:51.266 18148.431 - 18249.255: 99.3726% ( 9) 00:09:51.266 18249.255 - 18350.080: 99.4200% ( 8) 00:09:51.266 18350.080 - 18450.905: 99.4437% ( 4) 00:09:51.266 18450.905 - 18551.729: 99.4732% ( 5) 00:09:51.266 18551.729 - 18652.554: 99.4969% ( 4) 00:09:51.266 18652.554 - 18753.378: 99.5206% ( 4) 00:09:51.266 18753.378 - 18854.203: 99.5443% ( 4) 00:09:51.266 18854.203 - 18955.028: 99.5739% ( 5) 00:09:51.266 18955.028 - 19055.852: 99.5975% ( 4) 00:09:51.266 19055.852 - 19156.677: 99.6212% ( 4) 00:09:51.266 19156.677 - 19257.502: 99.6449% ( 4) 00:09:51.266 19257.502 - 19358.326: 99.6745% ( 5) 00:09:51.266 19358.326 - 19459.151: 99.6922% ( 3) 00:09:51.266 19459.151 - 19559.975: 99.7218% ( 5) 00:09:51.266 19559.975 - 19660.800: 99.7455% ( 4) 00:09:51.266 19660.800 - 19761.625: 99.7692% ( 4) 00:09:51.266 19761.625 - 19862.449: 99.7929% ( 4) 00:09:51.266 19862.449 - 19963.274: 99.8224% ( 5) 00:09:51.266 19963.274 - 20064.098: 99.8461% ( 4) 00:09:51.266 20064.098 - 20164.923: 99.8698% ( 4) 00:09:51.266 20164.923 - 20265.748: 99.8935% ( 4) 00:09:51.266 20265.748 - 20366.572: 99.9231% ( 5) 00:09:51.266 20366.572 - 20467.397: 99.9467% ( 4) 00:09:51.266 20467.397 - 20568.222: 99.9704% ( 4) 00:09:51.266 20568.222 - 20669.046: 99.9941% ( 4) 00:09:51.266 20669.046 - 20769.871: 100.0000% ( 1) 00:09:51.266 00:09:51.266 16:19:10 -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:09:52.642 Initializing NVMe Controllers 00:09:52.642 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:09:52.642 Attached to NVMe 
Controller at 0000:00:07.0 [1b36:0010] 00:09:52.642 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:09:52.642 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:09:52.642 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:09:52.642 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:09:52.642 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:09:52.642 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:09:52.642 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:09:52.642 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:09:52.642 Initialization complete. Launching workers. 00:09:52.642 ======================================================== 00:09:52.642 Latency(us) 00:09:52.642 Device Information : IOPS MiB/s Average min max 00:09:52.642 PCIE (0000:00:06.0) NSID 1 from core 0: 19017.08 222.86 6729.95 4712.24 25347.16 00:09:52.642 PCIE (0000:00:07.0) NSID 1 from core 0: 19017.08 222.86 6728.07 4968.74 25094.41 00:09:52.642 PCIE (0000:00:09.0) NSID 1 from core 0: 19017.08 222.86 6725.72 4729.63 25223.86 00:09:52.642 PCIE (0000:00:08.0) NSID 1 from core 0: 19017.08 222.86 6723.21 4674.51 25067.28 00:09:52.642 PCIE (0000:00:08.0) NSID 2 from core 0: 19017.08 222.86 6720.83 4884.70 24544.49 00:09:52.642 PCIE (0000:00:08.0) NSID 3 from core 0: 19017.08 222.86 6715.19 4760.70 24120.02 00:09:52.642 ======================================================== 00:09:52.642 Total : 114102.47 1337.14 6723.83 4674.51 25347.16 00:09:52.642 00:09:52.642 Summary latency data for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:52.642 ================================================================================= 00:09:52.642 1.00000% : 5167.262us 00:09:52.642 10.00000% : 5671.385us 00:09:52.642 25.00000% : 5948.652us 00:09:52.642 50.00000% : 6351.951us 00:09:52.642 75.00000% : 6956.898us 00:09:52.642 90.00000% : 7713.083us 00:09:52.642 95.00000% : 9376.689us 00:09:52.642 98.00000% : 11443.594us 00:09:52.642 99.00000% : 12703.902us 00:09:52.642 99.50000% : 23290.486us 00:09:52.642 99.90000% : 24903.680us 00:09:52.642 99.99000% : 25306.978us 00:09:52.642 99.99900% : 25407.803us 00:09:52.642 99.99990% : 25407.803us 00:09:52.642 99.99999% : 25407.803us 00:09:52.642 00:09:52.642 Summary latency data for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:52.642 ================================================================================= 00:09:52.642 1.00000% : 5368.911us 00:09:52.642 10.00000% : 5822.622us 00:09:52.642 25.00000% : 6099.889us 00:09:52.642 50.00000% : 6351.951us 00:09:52.642 75.00000% : 6755.249us 00:09:52.642 90.00000% : 7713.083us 00:09:52.642 95.00000% : 9477.514us 00:09:52.642 98.00000% : 11191.532us 00:09:52.642 99.00000% : 12552.665us 00:09:52.642 99.50000% : 22483.889us 00:09:52.642 99.90000% : 24500.382us 00:09:52.642 99.99000% : 25105.329us 00:09:52.642 99.99900% : 25105.329us 00:09:52.642 99.99990% : 25105.329us 00:09:52.642 99.99999% : 25105.329us 00:09:52.642 00:09:52.642 Summary latency data for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:52.643 ================================================================================= 00:09:52.643 1.00000% : 5293.292us 00:09:52.643 10.00000% : 5747.003us 00:09:52.643 25.00000% : 6074.683us 00:09:52.643 50.00000% : 6351.951us 00:09:52.643 75.00000% : 6805.662us 00:09:52.643 90.00000% : 7713.083us 00:09:52.643 95.00000% : 9578.338us 00:09:52.643 98.00000% : 10939.471us 00:09:52.643 99.00000% : 13208.025us 00:09:52.643 99.50000% : 22685.538us 00:09:52.643 99.90000% : 24802.855us 00:09:52.643 99.99000% : 
25206.154us 00:09:52.643 99.99900% : 25306.978us 00:09:52.643 99.99990% : 25306.978us 00:09:52.643 99.99999% : 25306.978us 00:09:52.643 00:09:52.643 Summary latency data for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:52.643 ================================================================================= 00:09:52.643 1.00000% : 5343.705us 00:09:52.643 10.00000% : 5822.622us 00:09:52.643 25.00000% : 6099.889us 00:09:52.643 50.00000% : 6326.745us 00:09:52.643 75.00000% : 6755.249us 00:09:52.643 90.00000% : 7612.258us 00:09:52.643 95.00000% : 9527.926us 00:09:52.643 98.00000% : 11241.945us 00:09:52.643 99.00000% : 13308.849us 00:09:52.643 99.50000% : 22383.065us 00:09:52.643 99.90000% : 24601.206us 00:09:52.643 99.99000% : 25105.329us 00:09:52.643 99.99900% : 25105.329us 00:09:52.643 99.99990% : 25105.329us 00:09:52.643 99.99999% : 25105.329us 00:09:52.643 00:09:52.643 Summary latency data for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:52.643 ================================================================================= 00:09:52.643 1.00000% : 5343.705us 00:09:52.643 10.00000% : 5822.622us 00:09:52.643 25.00000% : 6099.889us 00:09:52.643 50.00000% : 6326.745us 00:09:52.643 75.00000% : 6755.249us 00:09:52.643 90.00000% : 7763.495us 00:09:52.643 95.00000% : 9074.215us 00:09:52.643 98.00000% : 11544.418us 00:09:52.643 99.00000% : 12804.726us 00:09:52.643 99.50000% : 22887.188us 00:09:52.643 99.90000% : 23996.258us 00:09:52.643 99.99000% : 24601.206us 00:09:52.643 99.99900% : 24601.206us 00:09:52.643 99.99990% : 24601.206us 00:09:52.643 99.99999% : 24601.206us 00:09:52.643 00:09:52.643 Summary latency data for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:52.643 ================================================================================= 00:09:52.643 1.00000% : 5343.705us 00:09:52.643 10.00000% : 5822.622us 00:09:52.643 25.00000% : 6074.683us 00:09:52.643 50.00000% : 6326.745us 00:09:52.643 75.00000% : 6755.249us 00:09:52.643 90.00000% : 7864.320us 00:09:52.643 95.00000% : 9023.803us 00:09:52.643 98.00000% : 11695.655us 00:09:52.643 99.00000% : 12905.551us 00:09:52.643 99.50000% : 22282.240us 00:09:52.643 99.90000% : 23592.960us 00:09:52.643 99.99000% : 24097.083us 00:09:52.643 99.99900% : 24197.908us 00:09:52.643 99.99990% : 24197.908us 00:09:52.643 99.99999% : 24197.908us 00:09:52.643 00:09:52.643 Latency histogram for PCIE (0000:00:06.0) NSID 1 from core 0: 00:09:52.643 ============================================================================== 00:09:52.643 Range in us Cumulative IO count 00:09:52.643 4688.345 - 4713.551: 0.0052% ( 1) 00:09:52.643 4763.963 - 4789.169: 0.0105% ( 1) 00:09:52.643 4789.169 - 4814.375: 0.0157% ( 1) 00:09:52.643 4814.375 - 4839.582: 0.0419% ( 5) 00:09:52.643 4839.582 - 4864.788: 0.0577% ( 3) 00:09:52.643 4864.788 - 4889.994: 0.0682% ( 2) 00:09:52.643 4889.994 - 4915.200: 0.0996% ( 6) 00:09:52.643 4915.200 - 4940.406: 0.1154% ( 3) 00:09:52.643 4940.406 - 4965.612: 0.1573% ( 8) 00:09:52.643 4965.612 - 4990.818: 0.2097% ( 10) 00:09:52.643 4990.818 - 5016.025: 0.2464% ( 7) 00:09:52.643 5016.025 - 5041.231: 0.3461% ( 19) 00:09:52.643 5041.231 - 5066.437: 0.4824% ( 26) 00:09:52.643 5066.437 - 5091.643: 0.6292% ( 28) 00:09:52.643 5091.643 - 5116.849: 0.7760% ( 28) 00:09:52.643 5116.849 - 5142.055: 0.9071% ( 25) 00:09:52.643 5142.055 - 5167.262: 1.0487% ( 27) 00:09:52.643 5167.262 - 5192.468: 1.1483% ( 19) 00:09:52.643 5192.468 - 5217.674: 1.3633% ( 41) 00:09:52.643 5217.674 - 5242.880: 1.5677% ( 39) 00:09:52.643 5242.880 - 5268.086: 1.7460% ( 34) 
00:09:52.643 5268.086 - 5293.292: 2.0501% ( 58) 00:09:52.643 5293.292 - 5318.498: 2.3647% ( 60) 00:09:52.643 5318.498 - 5343.705: 2.8419% ( 91) 00:09:52.643 5343.705 - 5368.911: 3.4186% ( 110) 00:09:52.643 5368.911 - 5394.117: 3.9325% ( 98) 00:09:52.643 5394.117 - 5419.323: 4.4044% ( 90) 00:09:52.643 5419.323 - 5444.529: 4.8448% ( 84) 00:09:52.643 5444.529 - 5469.735: 5.3272% ( 92) 00:09:52.643 5469.735 - 5494.942: 5.7519% ( 81) 00:09:52.643 5494.942 - 5520.148: 6.1818% ( 82) 00:09:52.643 5520.148 - 5545.354: 6.6485% ( 89) 00:09:52.643 5545.354 - 5570.560: 7.3668% ( 137) 00:09:52.643 5570.560 - 5595.766: 7.8649% ( 95) 00:09:52.643 5595.766 - 5620.972: 8.6881% ( 157) 00:09:52.643 5620.972 - 5646.178: 9.6739% ( 188) 00:09:52.643 5646.178 - 5671.385: 10.5600% ( 169) 00:09:52.643 5671.385 - 5696.591: 11.4723% ( 174) 00:09:52.643 5696.591 - 5721.797: 12.4790% ( 192) 00:09:52.643 5721.797 - 5747.003: 13.6273% ( 219) 00:09:52.643 5747.003 - 5772.209: 14.8228% ( 228) 00:09:52.643 5772.209 - 5797.415: 16.5898% ( 337) 00:09:52.643 5797.415 - 5822.622: 18.2204% ( 311) 00:09:52.643 5822.622 - 5847.828: 20.0084% ( 341) 00:09:52.643 5847.828 - 5873.034: 21.4346% ( 272) 00:09:52.643 5873.034 - 5898.240: 23.1596% ( 329) 00:09:52.643 5898.240 - 5923.446: 24.6435% ( 283) 00:09:52.643 5923.446 - 5948.652: 26.3318% ( 322) 00:09:52.643 5948.652 - 5973.858: 27.7632% ( 273) 00:09:52.643 5973.858 - 5999.065: 29.3362% ( 300) 00:09:52.643 5999.065 - 6024.271: 31.1923% ( 354) 00:09:52.643 6024.271 - 6049.477: 32.9069% ( 327) 00:09:52.643 6049.477 - 6074.683: 34.7473% ( 351) 00:09:52.643 6074.683 - 6099.889: 36.3622% ( 308) 00:09:52.643 6099.889 - 6125.095: 38.1344% ( 338) 00:09:52.643 6125.095 - 6150.302: 39.5187% ( 264) 00:09:52.643 6150.302 - 6175.508: 40.7980% ( 244) 00:09:52.643 6175.508 - 6200.714: 42.1456% ( 257) 00:09:52.643 6200.714 - 6225.920: 43.5770% ( 273) 00:09:52.643 6225.920 - 6251.126: 44.9874% ( 269) 00:09:52.643 6251.126 - 6276.332: 46.4346% ( 276) 00:09:52.643 6276.332 - 6301.538: 47.6510% ( 232) 00:09:52.643 6301.538 - 6326.745: 48.9356% ( 245) 00:09:52.643 6326.745 - 6351.951: 50.3303% ( 266) 00:09:52.643 6351.951 - 6377.157: 51.7146% ( 264) 00:09:52.643 6377.157 - 6402.363: 52.9572% ( 237) 00:09:52.643 6402.363 - 6427.569: 54.1894% ( 235) 00:09:52.643 6427.569 - 6452.775: 55.4058% ( 232) 00:09:52.643 6452.775 - 6503.188: 57.5136% ( 402) 00:09:52.643 6503.188 - 6553.600: 59.5847% ( 395) 00:09:52.643 6553.600 - 6604.012: 61.7397% ( 411) 00:09:52.643 6604.012 - 6654.425: 63.9524% ( 422) 00:09:52.643 6654.425 - 6704.837: 66.1021% ( 410) 00:09:52.643 6704.837 - 6755.249: 68.1418% ( 389) 00:09:52.643 6755.249 - 6805.662: 70.4593% ( 442) 00:09:52.643 6805.662 - 6856.074: 72.4622% ( 382) 00:09:52.643 6856.074 - 6906.486: 74.3760% ( 365) 00:09:52.643 6906.486 - 6956.898: 76.3842% ( 383) 00:09:52.643 6956.898 - 7007.311: 78.3767% ( 380) 00:09:52.643 7007.311 - 7057.723: 80.1594% ( 340) 00:09:52.643 7057.723 - 7108.135: 81.8058% ( 314) 00:09:52.643 7108.135 - 7158.548: 83.1481% ( 256) 00:09:52.643 7158.548 - 7208.960: 84.1338% ( 188) 00:09:52.643 7208.960 - 7259.372: 84.9885% ( 163) 00:09:52.643 7259.372 - 7309.785: 85.7802% ( 151) 00:09:52.643 7309.785 - 7360.197: 86.4985% ( 137) 00:09:52.643 7360.197 - 7410.609: 87.1382% ( 122) 00:09:52.643 7410.609 - 7461.022: 87.7569% ( 118) 00:09:52.643 7461.022 - 7511.434: 88.3704% ( 117) 00:09:52.643 7511.434 - 7561.846: 88.7794% ( 78) 00:09:52.643 7561.846 - 7612.258: 89.2984% ( 99) 00:09:52.643 7612.258 - 7662.671: 89.8018% ( 96) 00:09:52.643 7662.671 - 7713.083: 
90.0273% ( 43) 00:09:52.643 7713.083 - 7763.495: 90.2789% ( 48) 00:09:52.643 7763.495 - 7813.908: 90.5831% ( 58) 00:09:52.643 7813.908 - 7864.320: 90.8085% ( 43) 00:09:52.643 7864.320 - 7914.732: 90.9868% ( 34) 00:09:52.643 7914.732 - 7965.145: 91.1441% ( 30) 00:09:52.643 7965.145 - 8015.557: 91.3224% ( 34) 00:09:52.643 8015.557 - 8065.969: 91.5059% ( 35) 00:09:52.643 8065.969 - 8116.382: 91.6789% ( 33) 00:09:52.643 8116.382 - 8166.794: 91.8991% ( 42) 00:09:52.643 8166.794 - 8217.206: 92.1403% ( 46) 00:09:52.643 8217.206 - 8267.618: 92.3186% ( 34) 00:09:52.643 8267.618 - 8318.031: 92.4497% ( 25) 00:09:52.643 8318.031 - 8368.443: 92.5755% ( 24) 00:09:52.643 8368.443 - 8418.855: 92.6856% ( 21) 00:09:52.643 8418.855 - 8469.268: 92.8219% ( 26) 00:09:52.643 8469.268 - 8519.680: 92.9216% ( 19) 00:09:52.643 8519.680 - 8570.092: 93.0789% ( 30) 00:09:52.644 8570.092 - 8620.505: 93.2152% ( 26) 00:09:52.644 8620.505 - 8670.917: 93.2991% ( 16) 00:09:52.644 8670.917 - 8721.329: 93.4249% ( 24) 00:09:52.644 8721.329 - 8771.742: 93.5403% ( 22) 00:09:52.644 8771.742 - 8822.154: 93.6714% ( 25) 00:09:52.644 8822.154 - 8872.566: 93.7710% ( 19) 00:09:52.644 8872.566 - 8922.978: 93.8758% ( 20) 00:09:52.644 8922.978 - 8973.391: 94.0069% ( 25) 00:09:52.644 8973.391 - 9023.803: 94.1170% ( 21) 00:09:52.644 9023.803 - 9074.215: 94.2534% ( 26) 00:09:52.644 9074.215 - 9124.628: 94.4421% ( 36) 00:09:52.644 9124.628 - 9175.040: 94.5732% ( 25) 00:09:52.644 9175.040 - 9225.452: 94.7305% ( 30) 00:09:52.644 9225.452 - 9275.865: 94.8563% ( 24) 00:09:52.644 9275.865 - 9326.277: 94.9612% ( 20) 00:09:52.644 9326.277 - 9376.689: 95.0870% ( 24) 00:09:52.644 9376.689 - 9427.102: 95.2496% ( 31) 00:09:52.644 9427.102 - 9477.514: 95.3964% ( 28) 00:09:52.644 9477.514 - 9527.926: 95.5432% ( 28) 00:09:52.644 9527.926 - 9578.338: 95.6323% ( 17) 00:09:52.644 9578.338 - 9628.751: 95.7057% ( 14) 00:09:52.644 9628.751 - 9679.163: 95.7949% ( 17) 00:09:52.644 9679.163 - 9729.575: 95.8368% ( 8) 00:09:52.644 9729.575 - 9779.988: 95.9155% ( 15) 00:09:52.644 9779.988 - 9830.400: 95.9994% ( 16) 00:09:52.644 9830.400 - 9880.812: 96.0728% ( 14) 00:09:52.644 9880.812 - 9931.225: 96.1147% ( 8) 00:09:52.644 9931.225 - 9981.637: 96.1462% ( 6) 00:09:52.644 9981.637 - 10032.049: 96.1986% ( 10) 00:09:52.644 10032.049 - 10082.462: 96.2563% ( 11) 00:09:52.644 10082.462 - 10132.874: 96.3507% ( 18) 00:09:52.644 10132.874 - 10183.286: 96.4870% ( 26) 00:09:52.644 10183.286 - 10233.698: 96.6286% ( 27) 00:09:52.644 10233.698 - 10284.111: 96.7701% ( 27) 00:09:52.644 10284.111 - 10334.523: 96.8802% ( 21) 00:09:52.644 10334.523 - 10384.935: 96.9274% ( 9) 00:09:52.644 10384.935 - 10435.348: 96.9904% ( 12) 00:09:52.644 10435.348 - 10485.760: 97.0638% ( 14) 00:09:52.644 10485.760 - 10536.172: 97.1319% ( 13) 00:09:52.644 10536.172 - 10586.585: 97.2053% ( 14) 00:09:52.644 10586.585 - 10636.997: 97.2578% ( 10) 00:09:52.644 10636.997 - 10687.409: 97.3312% ( 14) 00:09:52.644 10687.409 - 10737.822: 97.3836% ( 10) 00:09:52.644 10737.822 - 10788.234: 97.4518% ( 13) 00:09:52.644 10788.234 - 10838.646: 97.5147% ( 12) 00:09:52.644 10838.646 - 10889.058: 97.5461% ( 6) 00:09:52.644 10889.058 - 10939.471: 97.6091% ( 12) 00:09:52.644 10939.471 - 10989.883: 97.6405% ( 6) 00:09:52.644 10989.883 - 11040.295: 97.6930% ( 10) 00:09:52.644 11040.295 - 11090.708: 97.7611% ( 13) 00:09:52.644 11090.708 - 11141.120: 97.8188% ( 11) 00:09:52.644 11141.120 - 11191.532: 97.8398% ( 4) 00:09:52.644 11191.532 - 11241.945: 97.8660% ( 5) 00:09:52.644 11241.945 - 11292.357: 97.8922% ( 5) 00:09:52.644 
11292.357 - 11342.769: 97.9289% ( 7) 00:09:52.644 11342.769 - 11393.182: 97.9971% ( 13) 00:09:52.644 11393.182 - 11443.594: 98.0495% ( 10) 00:09:52.644 11443.594 - 11494.006: 98.0914% ( 8) 00:09:52.644 11494.006 - 11544.418: 98.1281% ( 7) 00:09:52.644 11544.418 - 11594.831: 98.1596% ( 6) 00:09:52.644 11594.831 - 11645.243: 98.2120% ( 10) 00:09:52.644 11645.243 - 11695.655: 98.2540% ( 8) 00:09:52.644 11695.655 - 11746.068: 98.2959% ( 8) 00:09:52.644 11746.068 - 11796.480: 98.3484% ( 10) 00:09:52.644 11796.480 - 11846.892: 98.3903% ( 8) 00:09:52.644 11846.892 - 11897.305: 98.4532% ( 12) 00:09:52.644 11897.305 - 11947.717: 98.4899% ( 7) 00:09:52.644 11947.717 - 11998.129: 98.5371% ( 9) 00:09:52.644 11998.129 - 12048.542: 98.5948% ( 11) 00:09:52.644 12048.542 - 12098.954: 98.6525% ( 11) 00:09:52.644 12098.954 - 12149.366: 98.6944% ( 8) 00:09:52.644 12149.366 - 12199.778: 98.7416% ( 9) 00:09:52.644 12199.778 - 12250.191: 98.7731% ( 6) 00:09:52.644 12250.191 - 12300.603: 98.8307% ( 11) 00:09:52.644 12300.603 - 12351.015: 98.8674% ( 7) 00:09:52.644 12351.015 - 12401.428: 98.8937% ( 5) 00:09:52.644 12401.428 - 12451.840: 98.9094% ( 3) 00:09:52.644 12451.840 - 12502.252: 98.9251% ( 3) 00:09:52.644 12502.252 - 12552.665: 98.9409% ( 3) 00:09:52.644 12552.665 - 12603.077: 98.9618% ( 4) 00:09:52.644 12603.077 - 12653.489: 98.9880% ( 5) 00:09:52.644 12653.489 - 12703.902: 99.0247% ( 7) 00:09:52.644 12703.902 - 12754.314: 99.0457% ( 4) 00:09:52.644 12754.314 - 12804.726: 99.0772% ( 6) 00:09:52.644 12804.726 - 12855.138: 99.0982% ( 4) 00:09:52.644 12855.138 - 12905.551: 99.1139% ( 3) 00:09:52.644 12905.551 - 13006.375: 99.1453% ( 6) 00:09:52.644 13006.375 - 13107.200: 99.1768% ( 6) 00:09:52.644 13107.200 - 13208.025: 99.2030% ( 5) 00:09:52.644 13208.025 - 13308.849: 99.2345% ( 6) 00:09:52.644 13308.849 - 13409.674: 99.2607% ( 5) 00:09:52.644 13409.674 - 13510.498: 99.2869% ( 5) 00:09:52.644 13510.498 - 13611.323: 99.3079% ( 4) 00:09:52.644 13611.323 - 13712.148: 99.3289% ( 4) 00:09:52.644 22483.889 - 22584.714: 99.3603% ( 6) 00:09:52.644 22584.714 - 22685.538: 99.4023% ( 8) 00:09:52.644 22685.538 - 22786.363: 99.4128% ( 2) 00:09:52.644 22786.363 - 22887.188: 99.4180% ( 1) 00:09:52.644 22887.188 - 22988.012: 99.4390% ( 4) 00:09:52.644 22988.012 - 23088.837: 99.4547% ( 3) 00:09:52.644 23088.837 - 23189.662: 99.4966% ( 8) 00:09:52.644 23189.662 - 23290.486: 99.5176% ( 4) 00:09:52.644 23290.486 - 23391.311: 99.5491% ( 6) 00:09:52.644 23391.311 - 23492.135: 99.5701% ( 4) 00:09:52.644 23492.135 - 23592.960: 99.6120% ( 8) 00:09:52.644 23592.960 - 23693.785: 99.6435% ( 6) 00:09:52.644 23693.785 - 23794.609: 99.6644% ( 4) 00:09:52.644 23794.609 - 23895.434: 99.6802% ( 3) 00:09:52.644 23895.434 - 23996.258: 99.6906% ( 2) 00:09:52.644 23996.258 - 24097.083: 99.7064% ( 3) 00:09:52.644 24097.083 - 24197.908: 99.7326% ( 5) 00:09:52.644 24197.908 - 24298.732: 99.7536% ( 4) 00:09:52.644 24298.732 - 24399.557: 99.7798% ( 5) 00:09:52.644 24399.557 - 24500.382: 99.8060% ( 5) 00:09:52.644 24500.382 - 24601.206: 99.8270% ( 4) 00:09:52.644 24601.206 - 24702.031: 99.8532% ( 5) 00:09:52.644 24702.031 - 24802.855: 99.8794% ( 5) 00:09:52.644 24802.855 - 24903.680: 99.9056% ( 5) 00:09:52.644 24903.680 - 25004.505: 99.9214% ( 3) 00:09:52.644 25004.505 - 25105.329: 99.9476% ( 5) 00:09:52.644 25105.329 - 25206.154: 99.9738% ( 5) 00:09:52.644 25206.154 - 25306.978: 99.9948% ( 4) 00:09:52.644 25306.978 - 25407.803: 100.0000% ( 1) 00:09:52.644 00:09:52.644 Latency histogram for PCIE (0000:00:07.0) NSID 1 from core 0: 00:09:52.644 
============================================================================== 00:09:52.644 Range in us Cumulative IO count 00:09:52.644 4965.612 - 4990.818: 0.0052% ( 1) 00:09:52.644 5066.437 - 5091.643: 0.0157% ( 2) 00:09:52.644 5091.643 - 5116.849: 0.0786% ( 12) 00:09:52.644 5116.849 - 5142.055: 0.1468% ( 13) 00:09:52.644 5142.055 - 5167.262: 0.2202% ( 14) 00:09:52.644 5167.262 - 5192.468: 0.2884% ( 13) 00:09:52.644 5192.468 - 5217.674: 0.3565% ( 13) 00:09:52.644 5217.674 - 5242.880: 0.4352% ( 15) 00:09:52.644 5242.880 - 5268.086: 0.5138% ( 15) 00:09:52.644 5268.086 - 5293.292: 0.6292% ( 22) 00:09:52.644 5293.292 - 5318.498: 0.7655% ( 26) 00:09:52.644 5318.498 - 5343.705: 0.9281% ( 31) 00:09:52.644 5343.705 - 5368.911: 1.1011% ( 33) 00:09:52.644 5368.911 - 5394.117: 1.4471% ( 66) 00:09:52.644 5394.117 - 5419.323: 2.0396% ( 113) 00:09:52.644 5419.323 - 5444.529: 2.4119% ( 71) 00:09:52.644 5444.529 - 5469.735: 2.7527% ( 65) 00:09:52.644 5469.735 - 5494.942: 3.0096% ( 49) 00:09:52.644 5494.942 - 5520.148: 3.2351% ( 43) 00:09:52.644 5520.148 - 5545.354: 3.4868% ( 48) 00:09:52.644 5545.354 - 5570.560: 3.7437% ( 49) 00:09:52.644 5570.560 - 5595.766: 3.9797% ( 45) 00:09:52.644 5595.766 - 5620.972: 4.2418% ( 50) 00:09:52.644 5620.972 - 5646.178: 4.6665% ( 81) 00:09:52.644 5646.178 - 5671.385: 5.1279% ( 88) 00:09:52.644 5671.385 - 5696.591: 5.7624% ( 121) 00:09:52.644 5696.591 - 5721.797: 6.3811% ( 118) 00:09:52.644 5721.797 - 5747.003: 7.2357% ( 163) 00:09:52.644 5747.003 - 5772.209: 8.0380% ( 153) 00:09:52.644 5772.209 - 5797.415: 9.2387% ( 229) 00:09:52.644 5797.415 - 5822.622: 10.2821% ( 199) 00:09:52.644 5822.622 - 5847.828: 11.4461% ( 222) 00:09:52.644 5847.828 - 5873.034: 12.5891% ( 218) 00:09:52.644 5873.034 - 5898.240: 13.5749% ( 188) 00:09:52.645 5898.240 - 5923.446: 14.7703% ( 228) 00:09:52.645 5923.446 - 5948.652: 16.0654% ( 247) 00:09:52.645 5948.652 - 5973.858: 17.6646% ( 305) 00:09:52.645 5973.858 - 5999.065: 19.2114% ( 295) 00:09:52.645 5999.065 - 6024.271: 20.8421% ( 311) 00:09:52.645 6024.271 - 6049.477: 22.5828% ( 332) 00:09:52.645 6049.477 - 6074.683: 24.7221% ( 408) 00:09:52.645 6074.683 - 6099.889: 26.9505% ( 425) 00:09:52.645 6099.889 - 6125.095: 29.2733% ( 443) 00:09:52.645 6125.095 - 6150.302: 31.2972% ( 386) 00:09:52.645 6150.302 - 6175.508: 33.6147% ( 442) 00:09:52.645 6175.508 - 6200.714: 36.4618% ( 543) 00:09:52.645 6200.714 - 6225.920: 39.2722% ( 536) 00:09:52.645 6225.920 - 6251.126: 42.1823% ( 555) 00:09:52.645 6251.126 - 6276.332: 44.8930% ( 517) 00:09:52.645 6276.332 - 6301.538: 46.9012% ( 383) 00:09:52.645 6301.538 - 6326.745: 49.5753% ( 510) 00:09:52.645 6326.745 - 6351.951: 51.7617% ( 417) 00:09:52.645 6351.951 - 6377.157: 53.5654% ( 344) 00:09:52.645 6377.157 - 6402.363: 55.3953% ( 349) 00:09:52.645 6402.363 - 6427.569: 57.7863% ( 456) 00:09:52.645 6427.569 - 6452.775: 60.1405% ( 449) 00:09:52.645 6452.775 - 6503.188: 64.2827% ( 790) 00:09:52.645 6503.188 - 6553.600: 66.9830% ( 515) 00:09:52.645 6553.600 - 6604.012: 69.5365% ( 487) 00:09:52.645 6604.012 - 6654.425: 71.8331% ( 438) 00:09:52.645 6654.425 - 6704.837: 73.5686% ( 331) 00:09:52.645 6704.837 - 6755.249: 76.0120% ( 466) 00:09:52.645 6755.249 - 6805.662: 77.6793% ( 318) 00:09:52.645 6805.662 - 6856.074: 79.1212% ( 275) 00:09:52.645 6856.074 - 6906.486: 80.9144% ( 342) 00:09:52.645 6906.486 - 6956.898: 82.1466% ( 235) 00:09:52.645 6956.898 - 7007.311: 83.0852% ( 179) 00:09:52.645 7007.311 - 7057.723: 83.8612% ( 148) 00:09:52.645 7057.723 - 7108.135: 84.8469% ( 188) 00:09:52.645 7108.135 - 7158.548: 
85.6281% ( 149) 00:09:52.645 7158.548 - 7208.960: 86.1682% ( 103) 00:09:52.645 7208.960 - 7259.372: 86.5877% ( 80) 00:09:52.645 7259.372 - 7309.785: 86.9232% ( 64) 00:09:52.645 7309.785 - 7360.197: 87.2378% ( 60) 00:09:52.645 7360.197 - 7410.609: 87.7045% ( 89) 00:09:52.645 7410.609 - 7461.022: 88.1974% ( 94) 00:09:52.645 7461.022 - 7511.434: 88.6430% ( 85) 00:09:52.645 7511.434 - 7561.846: 88.9943% ( 67) 00:09:52.645 7561.846 - 7612.258: 89.3194% ( 62) 00:09:52.645 7612.258 - 7662.671: 89.6340% ( 60) 00:09:52.645 7662.671 - 7713.083: 90.3838% ( 143) 00:09:52.645 7713.083 - 7763.495: 90.7141% ( 63) 00:09:52.645 7763.495 - 7813.908: 90.9658% ( 48) 00:09:52.645 7813.908 - 7864.320: 91.2752% ( 59) 00:09:52.645 7864.320 - 7914.732: 92.0512% ( 148) 00:09:52.645 7914.732 - 7965.145: 92.3029% ( 48) 00:09:52.645 7965.145 - 8015.557: 92.5493% ( 47) 00:09:52.645 8015.557 - 8065.969: 92.7957% ( 47) 00:09:52.645 8065.969 - 8116.382: 92.9320% ( 26) 00:09:52.645 8116.382 - 8166.794: 93.0684% ( 26) 00:09:52.645 8166.794 - 8217.206: 93.1785% ( 21) 00:09:52.645 8217.206 - 8267.618: 93.2886% ( 21) 00:09:52.645 8267.618 - 8318.031: 93.4092% ( 23) 00:09:52.645 8318.031 - 8368.443: 93.5193% ( 21) 00:09:52.645 8368.443 - 8418.855: 93.6242% ( 20) 00:09:52.645 8418.855 - 8469.268: 93.7762% ( 29) 00:09:52.645 8469.268 - 8519.680: 93.9283% ( 29) 00:09:52.645 8519.680 - 8570.092: 94.0227% ( 18) 00:09:52.645 8570.092 - 8620.505: 94.1013% ( 15) 00:09:52.645 8620.505 - 8670.917: 94.2114% ( 21) 00:09:52.645 8670.917 - 8721.329: 94.3268% ( 22) 00:09:52.645 8721.329 - 8771.742: 94.4264% ( 19) 00:09:52.645 8771.742 - 8822.154: 94.4841% ( 11) 00:09:52.645 8822.154 - 8872.566: 94.5417% ( 11) 00:09:52.645 8872.566 - 8922.978: 94.5889% ( 9) 00:09:52.645 8922.978 - 8973.391: 94.6309% ( 8) 00:09:52.645 8973.391 - 9023.803: 94.6676% ( 7) 00:09:52.645 9023.803 - 9074.215: 94.6990% ( 6) 00:09:52.645 9074.215 - 9124.628: 94.7410% ( 8) 00:09:52.645 9124.628 - 9175.040: 94.7882% ( 9) 00:09:52.645 9175.040 - 9225.452: 94.8354% ( 9) 00:09:52.645 9225.452 - 9275.865: 94.8721% ( 7) 00:09:52.645 9275.865 - 9326.277: 94.9035% ( 6) 00:09:52.645 9326.277 - 9376.689: 94.9455% ( 8) 00:09:52.645 9376.689 - 9427.102: 94.9927% ( 9) 00:09:52.645 9427.102 - 9477.514: 95.0346% ( 8) 00:09:52.645 9477.514 - 9527.926: 95.0608% ( 5) 00:09:52.645 9527.926 - 9578.338: 95.1080% ( 9) 00:09:52.645 9578.338 - 9628.751: 95.2548% ( 28) 00:09:52.645 9628.751 - 9679.163: 95.3649% ( 21) 00:09:52.645 9679.163 - 9729.575: 95.4436% ( 15) 00:09:52.645 9729.575 - 9779.988: 95.4750% ( 6) 00:09:52.645 9779.988 - 9830.400: 95.5117% ( 7) 00:09:52.645 9830.400 - 9880.812: 95.5589% ( 9) 00:09:52.645 9880.812 - 9931.225: 95.6061% ( 9) 00:09:52.645 9931.225 - 9981.637: 95.6638% ( 11) 00:09:52.645 9981.637 - 10032.049: 95.7267% ( 12) 00:09:52.645 10032.049 - 10082.462: 95.7896% ( 12) 00:09:52.645 10082.462 - 10132.874: 95.8421% ( 10) 00:09:52.645 10132.874 - 10183.286: 95.8945% ( 10) 00:09:52.645 10183.286 - 10233.698: 95.9522% ( 11) 00:09:52.645 10233.698 - 10284.111: 95.9994% ( 9) 00:09:52.645 10284.111 - 10334.523: 96.1147% ( 22) 00:09:52.645 10334.523 - 10384.935: 96.2668% ( 29) 00:09:52.645 10384.935 - 10435.348: 96.4293% ( 31) 00:09:52.645 10435.348 - 10485.760: 96.5185% ( 17) 00:09:52.645 10485.760 - 10536.172: 96.5919% ( 14) 00:09:52.645 10536.172 - 10586.585: 96.6862% ( 18) 00:09:52.645 10586.585 - 10636.997: 96.7596% ( 14) 00:09:52.645 10636.997 - 10687.409: 96.8383% ( 15) 00:09:52.645 10687.409 - 10737.822: 96.9169% ( 15) 00:09:52.645 10737.822 - 10788.234: 97.0061% ( 
17) 00:09:52.645 10788.234 - 10838.646: 97.0795% ( 14) 00:09:52.645 10838.646 - 10889.058: 97.2420% ( 31) 00:09:52.645 10889.058 - 10939.471: 97.6091% ( 70) 00:09:52.645 10939.471 - 10989.883: 97.6930% ( 16) 00:09:52.645 10989.883 - 11040.295: 97.7768% ( 16) 00:09:52.645 11040.295 - 11090.708: 97.8660% ( 17) 00:09:52.645 11090.708 - 11141.120: 97.9499% ( 16) 00:09:52.645 11141.120 - 11191.532: 98.0233% ( 14) 00:09:52.645 11191.532 - 11241.945: 98.1019% ( 15) 00:09:52.645 11241.945 - 11292.357: 98.1806% ( 15) 00:09:52.645 11292.357 - 11342.769: 98.2435% ( 12) 00:09:52.645 11342.769 - 11393.182: 98.2959% ( 10) 00:09:52.645 11393.182 - 11443.594: 98.3536% ( 11) 00:09:52.645 11443.594 - 11494.006: 98.4060% ( 10) 00:09:52.645 11494.006 - 11544.418: 98.4480% ( 8) 00:09:52.645 11544.418 - 11594.831: 98.4794% ( 6) 00:09:52.645 11594.831 - 11645.243: 98.5371% ( 11) 00:09:52.645 11645.243 - 11695.655: 98.6053% ( 13) 00:09:52.645 11695.655 - 11746.068: 98.6892% ( 16) 00:09:52.645 11746.068 - 11796.480: 98.7311% ( 8) 00:09:52.645 11796.480 - 11846.892: 98.7626% ( 6) 00:09:52.645 11846.892 - 11897.305: 98.7836% ( 4) 00:09:52.645 11897.305 - 11947.717: 98.8045% ( 4) 00:09:52.645 11947.717 - 11998.129: 98.8203% ( 3) 00:09:52.645 11998.129 - 12048.542: 98.8360% ( 3) 00:09:52.645 12048.542 - 12098.954: 98.8570% ( 4) 00:09:52.645 12098.954 - 12149.366: 98.8727% ( 3) 00:09:52.645 12149.366 - 12199.778: 98.8937% ( 4) 00:09:52.645 12199.778 - 12250.191: 98.9094% ( 3) 00:09:52.645 12250.191 - 12300.603: 98.9251% ( 3) 00:09:52.645 12300.603 - 12351.015: 98.9461% ( 4) 00:09:52.645 12351.015 - 12401.428: 98.9618% ( 3) 00:09:52.645 12401.428 - 12451.840: 98.9828% ( 4) 00:09:52.645 12451.840 - 12502.252: 98.9985% ( 3) 00:09:52.645 12502.252 - 12552.665: 99.0143% ( 3) 00:09:52.645 12552.665 - 12603.077: 99.0352% ( 4) 00:09:52.645 12603.077 - 12653.489: 99.0510% ( 3) 00:09:52.645 12653.489 - 12703.902: 99.0667% ( 3) 00:09:52.645 12703.902 - 12754.314: 99.0824% ( 3) 00:09:52.645 12754.314 - 12804.726: 99.1034% ( 4) 00:09:52.645 12804.726 - 12855.138: 99.1191% ( 3) 00:09:52.645 12855.138 - 12905.551: 99.1401% ( 4) 00:09:52.645 12905.551 - 13006.375: 99.1716% ( 6) 00:09:52.645 13006.375 - 13107.200: 99.2083% ( 7) 00:09:52.645 13107.200 - 13208.025: 99.2397% ( 6) 00:09:52.645 13208.025 - 13308.849: 99.2764% ( 7) 00:09:52.645 13308.849 - 13409.674: 99.3131% ( 7) 00:09:52.645 13409.674 - 13510.498: 99.3289% ( 3) 00:09:52.645 21979.766 - 22080.591: 99.3393% ( 2) 00:09:52.645 22080.591 - 22181.415: 99.3865% ( 9) 00:09:52.645 22181.415 - 22282.240: 99.4390% ( 10) 00:09:52.645 22282.240 - 22383.065: 99.4809% ( 8) 00:09:52.645 22383.065 - 22483.889: 99.5071% ( 5) 00:09:52.645 22483.889 - 22584.714: 99.5333% ( 5) 00:09:52.645 22584.714 - 22685.538: 99.5596% ( 5) 00:09:52.645 22685.538 - 22786.363: 99.5753% ( 3) 00:09:52.645 22786.363 - 22887.188: 99.6015% ( 5) 00:09:52.645 22887.188 - 22988.012: 99.6225% ( 4) 00:09:52.645 22988.012 - 23088.837: 99.6487% ( 5) 00:09:52.645 23088.837 - 23189.662: 99.6697% ( 4) 00:09:52.645 23189.662 - 23290.486: 99.6906% ( 4) 00:09:52.645 23290.486 - 23391.311: 99.7169% ( 5) 00:09:52.645 23391.311 - 23492.135: 99.7378% ( 4) 00:09:52.645 23492.135 - 23592.960: 99.7536% ( 3) 00:09:52.645 23592.960 - 23693.785: 99.7693% ( 3) 00:09:52.645 23693.785 - 23794.609: 99.7850% ( 3) 00:09:52.645 23794.609 - 23895.434: 99.8008% ( 3) 00:09:52.646 23895.434 - 23996.258: 99.8165% ( 3) 00:09:52.646 23996.258 - 24097.083: 99.8322% ( 3) 00:09:52.646 24097.083 - 24197.908: 99.8479% ( 3) 00:09:52.646 24197.908 - 
24298.732: 99.8637% ( 3) 00:09:52.646 24298.732 - 24399.557: 99.8846% ( 4) 00:09:52.646 24399.557 - 24500.382: 99.9004% ( 3) 00:09:52.646 24500.382 - 24601.206: 99.9161% ( 3) 00:09:52.646 24601.206 - 24702.031: 99.9318% ( 3) 00:09:52.646 24702.031 - 24802.855: 99.9476% ( 3) 00:09:52.646 24802.855 - 24903.680: 99.9633% ( 3) 00:09:52.646 24903.680 - 25004.505: 99.9790% ( 3) 00:09:52.646 25004.505 - 25105.329: 100.0000% ( 4) 00:09:52.646 00:09:52.646 Latency histogram for PCIE (0000:00:09.0) NSID 1 from core 0: 00:09:52.646 ============================================================================== 00:09:52.646 Range in us Cumulative IO count 00:09:52.646 4713.551 - 4738.757: 0.0052% ( 1) 00:09:52.646 4763.963 - 4789.169: 0.0105% ( 1) 00:09:52.646 4789.169 - 4814.375: 0.0157% ( 1) 00:09:52.646 4839.582 - 4864.788: 0.0419% ( 5) 00:09:52.646 4864.788 - 4889.994: 0.0577% ( 3) 00:09:52.646 4889.994 - 4915.200: 0.0839% ( 5) 00:09:52.646 4915.200 - 4940.406: 0.1101% ( 5) 00:09:52.646 4940.406 - 4965.612: 0.1311% ( 4) 00:09:52.646 4965.612 - 4990.818: 0.1468% ( 3) 00:09:52.646 4990.818 - 5016.025: 0.1730% ( 5) 00:09:52.646 5016.025 - 5041.231: 0.2150% ( 8) 00:09:52.646 5041.231 - 5066.437: 0.2464% ( 6) 00:09:52.646 5066.437 - 5091.643: 0.2936% ( 9) 00:09:52.646 5091.643 - 5116.849: 0.3513% ( 11) 00:09:52.646 5116.849 - 5142.055: 0.3985% ( 9) 00:09:52.646 5142.055 - 5167.262: 0.4614% ( 12) 00:09:52.646 5167.262 - 5192.468: 0.6030% ( 27) 00:09:52.646 5192.468 - 5217.674: 0.6869% ( 16) 00:09:52.646 5217.674 - 5242.880: 0.7970% ( 21) 00:09:52.646 5242.880 - 5268.086: 0.9490% ( 29) 00:09:52.646 5268.086 - 5293.292: 1.1378% ( 36) 00:09:52.646 5293.292 - 5318.498: 1.3266% ( 36) 00:09:52.646 5318.498 - 5343.705: 1.5782% ( 48) 00:09:52.646 5343.705 - 5368.911: 1.8666% ( 55) 00:09:52.646 5368.911 - 5394.117: 2.1655% ( 57) 00:09:52.646 5394.117 - 5419.323: 2.4276% ( 50) 00:09:52.646 5419.323 - 5444.529: 2.7055% ( 53) 00:09:52.646 5444.529 - 5469.735: 3.0096% ( 58) 00:09:52.646 5469.735 - 5494.942: 3.4239% ( 79) 00:09:52.646 5494.942 - 5520.148: 3.8538% ( 82) 00:09:52.646 5520.148 - 5545.354: 4.2366% ( 73) 00:09:52.646 5545.354 - 5570.560: 4.6875% ( 86) 00:09:52.646 5570.560 - 5595.766: 5.2905% ( 115) 00:09:52.646 5595.766 - 5620.972: 5.7099% ( 80) 00:09:52.646 5620.972 - 5646.178: 6.2238% ( 98) 00:09:52.646 5646.178 - 5671.385: 7.1676% ( 180) 00:09:52.646 5671.385 - 5696.591: 7.7968% ( 120) 00:09:52.646 5696.591 - 5721.797: 8.8716% ( 205) 00:09:52.646 5721.797 - 5747.003: 10.0042% ( 216) 00:09:52.646 5747.003 - 5772.209: 10.9794% ( 186) 00:09:52.646 5772.209 - 5797.415: 11.8970% ( 175) 00:09:52.646 5797.415 - 5822.622: 12.7622% ( 165) 00:09:52.646 5822.622 - 5847.828: 13.7427% ( 187) 00:09:52.646 5847.828 - 5873.034: 14.9119% ( 223) 00:09:52.646 5873.034 - 5898.240: 16.1860% ( 243) 00:09:52.646 5898.240 - 5923.446: 17.4444% ( 240) 00:09:52.646 5923.446 - 5948.652: 18.9283% ( 283) 00:09:52.646 5948.652 - 5973.858: 20.3230% ( 266) 00:09:52.646 5973.858 - 5999.065: 21.6495% ( 253) 00:09:52.646 5999.065 - 6024.271: 23.2540% ( 306) 00:09:52.646 6024.271 - 6049.477: 24.7431% ( 284) 00:09:52.646 6049.477 - 6074.683: 26.5363% ( 342) 00:09:52.646 6074.683 - 6099.889: 28.3295% ( 342) 00:09:52.646 6099.889 - 6125.095: 30.0336% ( 325) 00:09:52.646 6125.095 - 6150.302: 32.5556% ( 481) 00:09:52.646 6150.302 - 6175.508: 35.1353% ( 492) 00:09:52.646 6175.508 - 6200.714: 37.5577% ( 462) 00:09:52.646 6200.714 - 6225.920: 39.9906% ( 464) 00:09:52.646 6225.920 - 6251.126: 42.7538% ( 527) 00:09:52.646 6251.126 - 6276.332: 
44.9717% ( 423) 00:09:52.646 6276.332 - 6301.538: 47.1162% ( 409) 00:09:52.646 6301.538 - 6326.745: 49.5910% ( 472) 00:09:52.646 6326.745 - 6351.951: 51.2898% ( 324) 00:09:52.646 6351.951 - 6377.157: 52.7685% ( 282) 00:09:52.646 6377.157 - 6402.363: 54.6875% ( 366) 00:09:52.646 6402.363 - 6427.569: 56.9893% ( 439) 00:09:52.646 6427.569 - 6452.775: 59.3698% ( 454) 00:09:52.646 6452.775 - 6503.188: 62.6835% ( 632) 00:09:52.646 6503.188 - 6553.600: 64.9276% ( 428) 00:09:52.646 6553.600 - 6604.012: 67.3500% ( 462) 00:09:52.646 6604.012 - 6654.425: 69.8406% ( 475) 00:09:52.646 6654.425 - 6704.837: 72.3049% ( 470) 00:09:52.646 6704.837 - 6755.249: 74.3289% ( 386) 00:09:52.646 6755.249 - 6805.662: 76.1063% ( 339) 00:09:52.646 6805.662 - 6856.074: 78.2508% ( 409) 00:09:52.646 6856.074 - 6906.486: 80.1751% ( 367) 00:09:52.646 6906.486 - 6956.898: 81.4388% ( 241) 00:09:52.646 6956.898 - 7007.311: 82.4507% ( 193) 00:09:52.646 7007.311 - 7057.723: 83.6095% ( 221) 00:09:52.646 7057.723 - 7108.135: 84.6529% ( 199) 00:09:52.646 7108.135 - 7158.548: 85.3503% ( 133) 00:09:52.646 7158.548 - 7208.960: 86.0424% ( 132) 00:09:52.646 7208.960 - 7259.372: 86.5247% ( 92) 00:09:52.646 7259.372 - 7309.785: 87.0701% ( 104) 00:09:52.646 7309.785 - 7360.197: 87.6888% ( 118) 00:09:52.646 7360.197 - 7410.609: 88.0663% ( 72) 00:09:52.646 7410.609 - 7461.022: 88.4543% ( 74) 00:09:52.646 7461.022 - 7511.434: 88.8213% ( 70) 00:09:52.646 7511.434 - 7561.846: 89.1254% ( 58) 00:09:52.646 7561.846 - 7612.258: 89.4348% ( 59) 00:09:52.646 7612.258 - 7662.671: 89.8123% ( 72) 00:09:52.646 7662.671 - 7713.083: 90.2055% ( 75) 00:09:52.646 7713.083 - 7763.495: 90.4887% ( 54) 00:09:52.646 7763.495 - 7813.908: 90.7036% ( 41) 00:09:52.646 7813.908 - 7864.320: 90.9501% ( 47) 00:09:52.646 7864.320 - 7914.732: 91.2699% ( 61) 00:09:52.646 7914.732 - 7965.145: 91.6107% ( 65) 00:09:52.646 7965.145 - 8015.557: 92.0564% ( 85) 00:09:52.646 8015.557 - 8065.969: 92.3658% ( 59) 00:09:52.646 8065.969 - 8116.382: 92.5021% ( 26) 00:09:52.646 8116.382 - 8166.794: 92.6437% ( 27) 00:09:52.646 8166.794 - 8217.206: 92.7590% ( 22) 00:09:52.646 8217.206 - 8267.618: 92.8586% ( 19) 00:09:52.646 8267.618 - 8318.031: 92.9530% ( 18) 00:09:52.646 8318.031 - 8368.443: 93.0422% ( 17) 00:09:52.646 8368.443 - 8418.855: 93.1208% ( 15) 00:09:52.646 8418.855 - 8469.268: 93.1942% ( 14) 00:09:52.646 8469.268 - 8519.680: 93.2676% ( 14) 00:09:52.646 8519.680 - 8570.092: 93.3568% ( 17) 00:09:52.646 8570.092 - 8620.505: 93.4197% ( 12) 00:09:52.646 8620.505 - 8670.917: 93.4931% ( 14) 00:09:52.646 8670.917 - 8721.329: 93.5560% ( 12) 00:09:52.646 8721.329 - 8771.742: 93.6084% ( 10) 00:09:52.646 8771.742 - 8822.154: 93.6871% ( 15) 00:09:52.646 8822.154 - 8872.566: 93.7552% ( 13) 00:09:52.646 8872.566 - 8922.978: 93.8182% ( 12) 00:09:52.646 8922.978 - 8973.391: 93.8811% ( 12) 00:09:52.646 8973.391 - 9023.803: 93.9440% ( 12) 00:09:52.646 9023.803 - 9074.215: 94.0069% ( 12) 00:09:52.646 9074.215 - 9124.628: 94.1065% ( 19) 00:09:52.646 9124.628 - 9175.040: 94.3635% ( 49) 00:09:52.646 9175.040 - 9225.452: 94.6047% ( 46) 00:09:52.646 9225.452 - 9275.865: 94.6728% ( 13) 00:09:52.646 9275.865 - 9326.277: 94.7305% ( 11) 00:09:52.646 9326.277 - 9376.689: 94.7987% ( 13) 00:09:52.646 9376.689 - 9427.102: 94.8616% ( 12) 00:09:52.646 9427.102 - 9477.514: 94.9193% ( 11) 00:09:52.646 9477.514 - 9527.926: 94.9874% ( 13) 00:09:52.646 9527.926 - 9578.338: 95.0294% ( 8) 00:09:52.646 9578.338 - 9628.751: 95.0766% ( 9) 00:09:52.646 9628.751 - 9679.163: 95.1395% ( 12) 00:09:52.646 9679.163 - 9729.575: 
95.1867% ( 9) 00:09:52.646 9729.575 - 9779.988: 95.2915% ( 20) 00:09:52.646 9779.988 - 9830.400: 95.4174% ( 24) 00:09:52.646 9830.400 - 9880.812: 95.5327% ( 22) 00:09:52.646 9880.812 - 9931.225: 95.6795% ( 28) 00:09:52.646 9931.225 - 9981.637: 95.8421% ( 31) 00:09:52.646 9981.637 - 10032.049: 96.0203% ( 34) 00:09:52.646 10032.049 - 10082.462: 96.1881% ( 32) 00:09:52.646 10082.462 - 10132.874: 96.3769% ( 36) 00:09:52.646 10132.874 - 10183.286: 96.5342% ( 30) 00:09:52.646 10183.286 - 10233.698: 97.0323% ( 95) 00:09:52.646 10233.698 - 10284.111: 97.1214% ( 17) 00:09:52.646 10284.111 - 10334.523: 97.2001% ( 15) 00:09:52.646 10334.523 - 10384.935: 97.2997% ( 19) 00:09:52.646 10384.935 - 10435.348: 97.3784% ( 15) 00:09:52.646 10435.348 - 10485.760: 97.4622% ( 16) 00:09:52.646 10485.760 - 10536.172: 97.5357% ( 14) 00:09:52.646 10536.172 - 10586.585: 97.5881% ( 10) 00:09:52.646 10586.585 - 10636.997: 97.6405% ( 10) 00:09:52.646 10636.997 - 10687.409: 97.6930% ( 10) 00:09:52.647 10687.409 - 10737.822: 97.7611% ( 13) 00:09:52.647 10737.822 - 10788.234: 97.8450% ( 16) 00:09:52.647 10788.234 - 10838.646: 97.9184% ( 14) 00:09:52.647 10838.646 - 10889.058: 97.9813% ( 12) 00:09:52.647 10889.058 - 10939.471: 98.0390% ( 11) 00:09:52.647 10939.471 - 10989.883: 98.0914% ( 10) 00:09:52.647 10989.883 - 11040.295: 98.1386% ( 9) 00:09:52.647 11040.295 - 11090.708: 98.1806% ( 8) 00:09:52.647 11090.708 - 11141.120: 98.2225% ( 8) 00:09:52.647 11141.120 - 11191.532: 98.2697% ( 9) 00:09:52.647 11191.532 - 11241.945: 98.3064% ( 7) 00:09:52.647 11241.945 - 11292.357: 98.3431% ( 7) 00:09:52.647 11292.357 - 11342.769: 98.3798% ( 7) 00:09:52.647 11342.769 - 11393.182: 98.4113% ( 6) 00:09:52.647 11393.182 - 11443.594: 98.4480% ( 7) 00:09:52.647 11443.594 - 11494.006: 98.4847% ( 7) 00:09:52.647 11494.006 - 11544.418: 98.5214% ( 7) 00:09:52.647 11544.418 - 11594.831: 98.5633% ( 8) 00:09:52.647 11594.831 - 11645.243: 98.5948% ( 6) 00:09:52.647 11645.243 - 11695.655: 98.6105% ( 3) 00:09:52.647 11695.655 - 11746.068: 98.6315% ( 4) 00:09:52.647 11746.068 - 11796.480: 98.6472% ( 3) 00:09:52.647 11796.480 - 11846.892: 98.6577% ( 2) 00:09:52.647 12149.366 - 12199.778: 98.6787% ( 4) 00:09:52.647 12199.778 - 12250.191: 98.7206% ( 8) 00:09:52.647 12250.191 - 12300.603: 98.7573% ( 7) 00:09:52.647 12300.603 - 12351.015: 98.7836% ( 5) 00:09:52.647 12351.015 - 12401.428: 98.8098% ( 5) 00:09:52.647 12401.428 - 12451.840: 98.8255% ( 3) 00:09:52.647 12451.840 - 12502.252: 98.8360% ( 2) 00:09:52.647 12502.252 - 12552.665: 98.8465% ( 2) 00:09:52.647 12552.665 - 12603.077: 98.8622% ( 3) 00:09:52.647 12603.077 - 12653.489: 98.8727% ( 2) 00:09:52.647 12653.489 - 12703.902: 98.8832% ( 2) 00:09:52.647 12703.902 - 12754.314: 98.8884% ( 1) 00:09:52.647 12754.314 - 12804.726: 98.9042% ( 3) 00:09:52.647 12804.726 - 12855.138: 98.9199% ( 3) 00:09:52.647 12855.138 - 12905.551: 98.9356% ( 3) 00:09:52.647 12905.551 - 13006.375: 98.9671% ( 6) 00:09:52.647 13006.375 - 13107.200: 98.9985% ( 6) 00:09:52.647 13107.200 - 13208.025: 99.0352% ( 7) 00:09:52.647 13208.025 - 13308.849: 99.0667% ( 6) 00:09:52.647 13308.849 - 13409.674: 99.1034% ( 7) 00:09:52.647 13409.674 - 13510.498: 99.1401% ( 7) 00:09:52.647 13510.498 - 13611.323: 99.1768% ( 7) 00:09:52.647 13611.323 - 13712.148: 99.2083% ( 6) 00:09:52.647 13712.148 - 13812.972: 99.2450% ( 7) 00:09:52.647 13812.972 - 13913.797: 99.2764% ( 6) 00:09:52.647 13913.797 - 14014.622: 99.3131% ( 7) 00:09:52.647 14014.622 - 14115.446: 99.3289% ( 3) 00:09:52.647 22181.415 - 22282.240: 99.3446% ( 3) 00:09:52.647 22282.240 - 
22383.065: 99.3918% ( 9) 00:09:52.647 22383.065 - 22483.889: 99.4337% ( 8) 00:09:52.647 22483.889 - 22584.714: 99.4704% ( 7) 00:09:52.647 22584.714 - 22685.538: 99.5124% ( 8) 00:09:52.647 22685.538 - 22786.363: 99.5701% ( 11) 00:09:52.647 22786.363 - 22887.188: 99.5963% ( 5) 00:09:52.647 22887.188 - 22988.012: 99.6120% ( 3) 00:09:52.647 22988.012 - 23088.837: 99.6225% ( 2) 00:09:52.647 23088.837 - 23189.662: 99.6382% ( 3) 00:09:52.647 23189.662 - 23290.486: 99.6539% ( 3) 00:09:52.647 23290.486 - 23391.311: 99.6644% ( 2) 00:09:52.647 23391.311 - 23492.135: 99.6802% ( 3) 00:09:52.647 23492.135 - 23592.960: 99.6959% ( 3) 00:09:52.647 23592.960 - 23693.785: 99.7064% ( 2) 00:09:52.647 23693.785 - 23794.609: 99.7221% ( 3) 00:09:52.647 23794.609 - 23895.434: 99.7378% ( 3) 00:09:52.647 23895.434 - 23996.258: 99.7536% ( 3) 00:09:52.647 23996.258 - 24097.083: 99.7693% ( 3) 00:09:52.647 24097.083 - 24197.908: 99.7903% ( 4) 00:09:52.647 24197.908 - 24298.732: 99.8112% ( 4) 00:09:52.647 24298.732 - 24399.557: 99.8322% ( 4) 00:09:52.647 24399.557 - 24500.382: 99.8479% ( 3) 00:09:52.647 24500.382 - 24601.206: 99.8689% ( 4) 00:09:52.647 24601.206 - 24702.031: 99.8899% ( 4) 00:09:52.647 24702.031 - 24802.855: 99.9109% ( 4) 00:09:52.647 24802.855 - 24903.680: 99.9318% ( 4) 00:09:52.647 24903.680 - 25004.505: 99.9528% ( 4) 00:09:52.647 25004.505 - 25105.329: 99.9738% ( 4) 00:09:52.647 25105.329 - 25206.154: 99.9948% ( 4) 00:09:52.647 25206.154 - 25306.978: 100.0000% ( 1) 00:09:52.647 00:09:52.647 Latency histogram for PCIE (0000:00:08.0) NSID 1 from core 0: 00:09:52.647 ============================================================================== 00:09:52.647 Range in us Cumulative IO count 00:09:52.647 4663.138 - 4688.345: 0.0052% ( 1) 00:09:52.647 4789.169 - 4814.375: 0.0105% ( 1) 00:09:52.647 4889.994 - 4915.200: 0.0157% ( 1) 00:09:52.647 4990.818 - 5016.025: 0.0210% ( 1) 00:09:52.647 5016.025 - 5041.231: 0.0367% ( 3) 00:09:52.647 5041.231 - 5066.437: 0.0839% ( 9) 00:09:52.647 5066.437 - 5091.643: 0.1206% ( 7) 00:09:52.647 5091.643 - 5116.849: 0.1678% ( 9) 00:09:52.647 5116.849 - 5142.055: 0.2464% ( 15) 00:09:52.647 5142.055 - 5167.262: 0.3303% ( 16) 00:09:52.647 5167.262 - 5192.468: 0.3880% ( 11) 00:09:52.647 5192.468 - 5217.674: 0.4614% ( 14) 00:09:52.647 5217.674 - 5242.880: 0.5453% ( 16) 00:09:52.647 5242.880 - 5268.086: 0.6449% ( 19) 00:09:52.647 5268.086 - 5293.292: 0.7655% ( 23) 00:09:52.647 5293.292 - 5318.498: 0.9018% ( 26) 00:09:52.647 5318.498 - 5343.705: 1.0801% ( 34) 00:09:52.647 5343.705 - 5368.911: 1.3475% ( 51) 00:09:52.647 5368.911 - 5394.117: 1.5940% ( 47) 00:09:52.647 5394.117 - 5419.323: 1.9610% ( 70) 00:09:52.647 5419.323 - 5444.529: 2.2441% ( 54) 00:09:52.647 5444.529 - 5469.735: 2.4906% ( 47) 00:09:52.647 5469.735 - 5494.942: 2.7003% ( 40) 00:09:52.647 5494.942 - 5520.148: 2.9939% ( 56) 00:09:52.647 5520.148 - 5545.354: 3.2875% ( 56) 00:09:52.647 5545.354 - 5570.560: 3.5969% ( 59) 00:09:52.647 5570.560 - 5595.766: 3.9430% ( 66) 00:09:52.647 5595.766 - 5620.972: 4.3939% ( 86) 00:09:52.647 5620.972 - 5646.178: 4.9077% ( 98) 00:09:52.647 5646.178 - 5671.385: 5.2852% ( 72) 00:09:52.647 5671.385 - 5696.591: 5.7886% ( 96) 00:09:52.647 5696.591 - 5721.797: 6.5279% ( 141) 00:09:52.647 5721.797 - 5747.003: 7.1938% ( 127) 00:09:52.647 5747.003 - 5772.209: 8.1586% ( 184) 00:09:52.647 5772.209 - 5797.415: 9.1023% ( 180) 00:09:52.647 5797.415 - 5822.622: 10.2821% ( 225) 00:09:52.647 5822.622 - 5847.828: 11.2836% ( 191) 00:09:52.647 5847.828 - 5873.034: 12.2798% ( 190) 00:09:52.647 5873.034 - 
5898.240: 13.5644% ( 245) 00:09:52.647 5898.240 - 5923.446: 14.9801% ( 270) 00:09:52.647 5923.446 - 5948.652: 16.7628% ( 340) 00:09:52.647 5948.652 - 5973.858: 18.3725% ( 307) 00:09:52.647 5973.858 - 5999.065: 19.8721% ( 286) 00:09:52.647 5999.065 - 6024.271: 21.3140% ( 275) 00:09:52.647 6024.271 - 6049.477: 23.3693% ( 392) 00:09:52.647 6049.477 - 6074.683: 24.9318% ( 298) 00:09:52.647 6074.683 - 6099.889: 26.5573% ( 310) 00:09:52.647 6099.889 - 6125.095: 29.2156% ( 507) 00:09:52.647 6125.095 - 6150.302: 31.9369% ( 519) 00:09:52.647 6150.302 - 6175.508: 34.4012% ( 470) 00:09:52.647 6175.508 - 6200.714: 36.7922% ( 456) 00:09:52.647 6200.714 - 6225.920: 39.4138% ( 500) 00:09:52.647 6225.920 - 6251.126: 42.7380% ( 634) 00:09:52.647 6251.126 - 6276.332: 45.1552% ( 461) 00:09:52.647 6276.332 - 6301.538: 48.0705% ( 556) 00:09:52.647 6301.538 - 6326.745: 50.5401% ( 471) 00:09:52.647 6326.745 - 6351.951: 53.0096% ( 471) 00:09:52.647 6351.951 - 6377.157: 55.1909% ( 416) 00:09:52.647 6377.157 - 6402.363: 57.5713% ( 454) 00:09:52.647 6402.363 - 6427.569: 59.5952% ( 386) 00:09:52.647 6427.569 - 6452.775: 61.4880% ( 361) 00:09:52.647 6452.775 - 6503.188: 64.6026% ( 594) 00:09:52.647 6503.188 - 6553.600: 67.2032% ( 496) 00:09:52.647 6553.600 - 6604.012: 69.7515% ( 486) 00:09:52.647 6604.012 - 6654.425: 72.3259% ( 491) 00:09:52.647 6654.425 - 6704.837: 74.4757% ( 410) 00:09:52.647 6704.837 - 6755.249: 76.2427% ( 337) 00:09:52.647 6755.249 - 6805.662: 78.0726% ( 349) 00:09:52.647 6805.662 - 6856.074: 80.0860% ( 384) 00:09:52.647 6856.074 - 6906.486: 81.7638% ( 320) 00:09:52.648 6906.486 - 6956.898: 83.0380% ( 243) 00:09:52.648 6956.898 - 7007.311: 83.8716% ( 159) 00:09:52.648 7007.311 - 7057.723: 84.6005% ( 139) 00:09:52.648 7057.723 - 7108.135: 85.2926% ( 132) 00:09:52.648 7108.135 - 7158.548: 86.1577% ( 165) 00:09:52.648 7158.548 - 7208.960: 86.9599% ( 153) 00:09:52.648 7208.960 - 7259.372: 87.5786% ( 118) 00:09:52.648 7259.372 - 7309.785: 88.0296% ( 86) 00:09:52.648 7309.785 - 7360.197: 88.4700% ( 84) 00:09:52.648 7360.197 - 7410.609: 88.9681% ( 95) 00:09:52.648 7410.609 - 7461.022: 89.2827% ( 60) 00:09:52.648 7461.022 - 7511.434: 89.5554% ( 52) 00:09:52.648 7511.434 - 7561.846: 89.9696% ( 79) 00:09:52.648 7561.846 - 7612.258: 90.3314% ( 69) 00:09:52.648 7612.258 - 7662.671: 90.5726% ( 46) 00:09:52.648 7662.671 - 7713.083: 90.9029% ( 63) 00:09:52.648 7713.083 - 7763.495: 91.0917% ( 36) 00:09:52.648 7763.495 - 7813.908: 91.2647% ( 33) 00:09:52.648 7813.908 - 7864.320: 91.3748% ( 21) 00:09:52.648 7864.320 - 7914.732: 91.5268% ( 29) 00:09:52.648 7914.732 - 7965.145: 91.6474% ( 23) 00:09:52.648 7965.145 - 8015.557: 91.7733% ( 24) 00:09:52.648 8015.557 - 8065.969: 91.8939% ( 23) 00:09:52.648 8065.969 - 8116.382: 92.0302% ( 26) 00:09:52.648 8116.382 - 8166.794: 92.2504% ( 42) 00:09:52.648 8166.794 - 8217.206: 92.4864% ( 45) 00:09:52.648 8217.206 - 8267.618: 92.7643% ( 53) 00:09:52.648 8267.618 - 8318.031: 92.8377% ( 14) 00:09:52.648 8318.031 - 8368.443: 92.9268% ( 17) 00:09:52.648 8368.443 - 8418.855: 92.9950% ( 13) 00:09:52.648 8418.855 - 8469.268: 93.0998% ( 20) 00:09:52.648 8469.268 - 8519.680: 93.3515% ( 48) 00:09:52.648 8519.680 - 8570.092: 93.4459% ( 18) 00:09:52.648 8570.092 - 8620.505: 93.5508% ( 20) 00:09:52.648 8620.505 - 8670.917: 93.6242% ( 14) 00:09:52.648 8670.917 - 8721.329: 93.6871% ( 12) 00:09:52.648 8721.329 - 8771.742: 93.7552% ( 13) 00:09:52.648 8771.742 - 8822.154: 93.8234% ( 13) 00:09:52.648 8822.154 - 8872.566: 93.8916% ( 13) 00:09:52.648 8872.566 - 8922.978: 93.9597% ( 13) 
00:09:52.648 8922.978 - 8973.391: 94.0331% ( 14) 00:09:52.648 8973.391 - 9023.803: 94.0856% ( 10) 00:09:52.648 9023.803 - 9074.215: 94.2271% ( 27) 00:09:52.648 9074.215 - 9124.628: 94.4159% ( 36) 00:09:52.648 9124.628 - 9175.040: 94.5417% ( 24) 00:09:52.648 9175.040 - 9225.452: 94.6256% ( 16) 00:09:52.648 9225.452 - 9275.865: 94.7095% ( 16) 00:09:52.648 9275.865 - 9326.277: 94.7777% ( 13) 00:09:52.648 9326.277 - 9376.689: 94.8458% ( 13) 00:09:52.648 9376.689 - 9427.102: 94.9088% ( 12) 00:09:52.648 9427.102 - 9477.514: 94.9927% ( 16) 00:09:52.648 9477.514 - 9527.926: 95.1028% ( 21) 00:09:52.648 9527.926 - 9578.338: 95.2024% ( 19) 00:09:52.648 9578.338 - 9628.751: 95.3020% ( 19) 00:09:52.648 9628.751 - 9679.163: 95.3807% ( 15) 00:09:52.648 9679.163 - 9729.575: 95.4488% ( 13) 00:09:52.648 9729.575 - 9779.988: 95.5484% ( 19) 00:09:52.648 9779.988 - 9830.400: 95.6166% ( 13) 00:09:52.648 9830.400 - 9880.812: 95.7057% ( 17) 00:09:52.648 9880.812 - 9931.225: 95.7792% ( 14) 00:09:52.648 9931.225 - 9981.637: 95.9469% ( 32) 00:09:52.648 9981.637 - 10032.049: 96.1514% ( 39) 00:09:52.648 10032.049 - 10082.462: 96.2406% ( 17) 00:09:52.648 10082.462 - 10132.874: 96.3087% ( 13) 00:09:52.648 10132.874 - 10183.286: 96.4031% ( 18) 00:09:52.648 10183.286 - 10233.698: 96.4922% ( 17) 00:09:52.648 10233.698 - 10284.111: 96.5866% ( 18) 00:09:52.648 10284.111 - 10334.523: 96.6810% ( 18) 00:09:52.648 10334.523 - 10384.935: 96.7492% ( 13) 00:09:52.648 10384.935 - 10435.348: 96.8435% ( 18) 00:09:52.648 10435.348 - 10485.760: 96.9484% ( 20) 00:09:52.648 10485.760 - 10536.172: 97.0218% ( 14) 00:09:52.648 10536.172 - 10586.585: 97.0952% ( 14) 00:09:52.648 10586.585 - 10636.997: 97.1739% ( 15) 00:09:52.648 10636.997 - 10687.409: 97.2420% ( 13) 00:09:52.648 10687.409 - 10737.822: 97.3312% ( 17) 00:09:52.648 10737.822 - 10788.234: 97.3941% ( 12) 00:09:52.648 10788.234 - 10838.646: 97.4570% ( 12) 00:09:52.648 10838.646 - 10889.058: 97.5461% ( 17) 00:09:52.648 10889.058 - 10939.471: 97.6143% ( 13) 00:09:52.648 10939.471 - 10989.883: 97.7034% ( 17) 00:09:52.648 10989.883 - 11040.295: 97.7716% ( 13) 00:09:52.648 11040.295 - 11090.708: 97.8503% ( 15) 00:09:52.648 11090.708 - 11141.120: 97.9237% ( 14) 00:09:52.648 11141.120 - 11191.532: 97.9866% ( 12) 00:09:52.648 11191.532 - 11241.945: 98.0495% ( 12) 00:09:52.648 11241.945 - 11292.357: 98.0967% ( 9) 00:09:52.648 11292.357 - 11342.769: 98.1491% ( 10) 00:09:52.648 11342.769 - 11393.182: 98.1911% ( 8) 00:09:52.648 11393.182 - 11443.594: 98.2540% ( 12) 00:09:52.648 11443.594 - 11494.006: 98.3064% ( 10) 00:09:52.648 11494.006 - 11544.418: 98.3484% ( 8) 00:09:52.648 11544.418 - 11594.831: 98.4008% ( 10) 00:09:52.648 11594.831 - 11645.243: 98.4899% ( 17) 00:09:52.648 11645.243 - 11695.655: 98.5214% ( 6) 00:09:52.648 11695.655 - 11746.068: 98.5529% ( 6) 00:09:52.648 11746.068 - 11796.480: 98.5791% ( 5) 00:09:52.648 11796.480 - 11846.892: 98.6105% ( 6) 00:09:52.648 11846.892 - 11897.305: 98.6210% ( 2) 00:09:52.648 11947.717 - 11998.129: 98.6315% ( 2) 00:09:52.648 11998.129 - 12048.542: 98.6420% ( 2) 00:09:52.648 12048.542 - 12098.954: 98.6472% ( 1) 00:09:52.648 12098.954 - 12149.366: 98.6525% ( 1) 00:09:52.648 12149.366 - 12199.778: 98.6577% ( 1) 00:09:52.648 12300.603 - 12351.015: 98.6630% ( 1) 00:09:52.648 12351.015 - 12401.428: 98.6787% ( 3) 00:09:52.648 12401.428 - 12451.840: 98.6997% ( 4) 00:09:52.648 12451.840 - 12502.252: 98.7154% ( 3) 00:09:52.648 12502.252 - 12552.665: 98.7364% ( 4) 00:09:52.648 12552.665 - 12603.077: 98.7573% ( 4) 00:09:52.648 12603.077 - 12653.489: 98.7783% ( 
4) 00:09:52.648 12653.489 - 12703.902: 98.7993% ( 4) 00:09:52.648 12703.902 - 12754.314: 98.8203% ( 4) 00:09:52.648 12754.314 - 12804.726: 98.8360% ( 3) 00:09:52.648 12804.726 - 12855.138: 98.8570% ( 4) 00:09:52.648 12855.138 - 12905.551: 98.8779% ( 4) 00:09:52.648 12905.551 - 13006.375: 98.9199% ( 8) 00:09:52.648 13006.375 - 13107.200: 98.9566% ( 7) 00:09:52.648 13107.200 - 13208.025: 98.9985% ( 8) 00:09:52.648 13208.025 - 13308.849: 99.0405% ( 8) 00:09:52.648 13308.849 - 13409.674: 99.0824% ( 8) 00:09:52.648 13409.674 - 13510.498: 99.1191% ( 7) 00:09:52.648 13510.498 - 13611.323: 99.1611% ( 8) 00:09:52.648 13611.323 - 13712.148: 99.2030% ( 8) 00:09:52.648 13712.148 - 13812.972: 99.2397% ( 7) 00:09:52.648 13812.972 - 13913.797: 99.2817% ( 8) 00:09:52.648 13913.797 - 14014.622: 99.3184% ( 7) 00:09:52.648 14014.622 - 14115.446: 99.3289% ( 2) 00:09:52.648 21778.117 - 21878.942: 99.3393% ( 2) 00:09:52.648 21878.942 - 21979.766: 99.3760% ( 7) 00:09:52.648 21979.766 - 22080.591: 99.4128% ( 7) 00:09:52.648 22080.591 - 22181.415: 99.4442% ( 6) 00:09:52.648 22181.415 - 22282.240: 99.4862% ( 8) 00:09:52.648 22282.240 - 22383.065: 99.5071% ( 4) 00:09:52.648 22383.065 - 22483.889: 99.5176% ( 2) 00:09:52.648 22483.889 - 22584.714: 99.5333% ( 3) 00:09:52.648 22584.714 - 22685.538: 99.5543% ( 4) 00:09:52.648 22685.538 - 22786.363: 99.5648% ( 2) 00:09:52.648 22786.363 - 22887.188: 99.5805% ( 3) 00:09:52.648 22887.188 - 22988.012: 99.5963% ( 3) 00:09:52.648 22988.012 - 23088.837: 99.6120% ( 3) 00:09:52.648 23088.837 - 23189.662: 99.6277% ( 3) 00:09:52.648 23189.662 - 23290.486: 99.6435% ( 3) 00:09:52.648 23290.486 - 23391.311: 99.6644% ( 4) 00:09:52.648 23391.311 - 23492.135: 99.6802% ( 3) 00:09:52.648 23492.135 - 23592.960: 99.7011% ( 4) 00:09:52.648 23592.960 - 23693.785: 99.7221% ( 4) 00:09:52.648 23693.785 - 23794.609: 99.7431% ( 4) 00:09:52.648 23794.609 - 23895.434: 99.7641% ( 4) 00:09:52.648 23895.434 - 23996.258: 99.7850% ( 4) 00:09:52.648 23996.258 - 24097.083: 99.8008% ( 3) 00:09:52.648 24097.083 - 24197.908: 99.8217% ( 4) 00:09:52.648 24197.908 - 24298.732: 99.8427% ( 4) 00:09:52.648 24298.732 - 24399.557: 99.8637% ( 4) 00:09:52.648 24399.557 - 24500.382: 99.8794% ( 3) 00:09:52.648 24500.382 - 24601.206: 99.9004% ( 4) 00:09:52.648 24601.206 - 24702.031: 99.9214% ( 4) 00:09:52.648 24702.031 - 24802.855: 99.9423% ( 4) 00:09:52.648 24802.855 - 24903.680: 99.9633% ( 4) 00:09:52.648 24903.680 - 25004.505: 99.9843% ( 4) 00:09:52.648 25004.505 - 25105.329: 100.0000% ( 3) 00:09:52.648 00:09:52.648 Latency histogram for PCIE (0000:00:08.0) NSID 2 from core 0: 00:09:52.648 ============================================================================== 00:09:52.648 Range in us Cumulative IO count 00:09:52.648 4864.788 - 4889.994: 0.0052% ( 1) 00:09:52.648 4915.200 - 4940.406: 0.0105% ( 1) 00:09:52.649 4990.818 - 5016.025: 0.0157% ( 1) 00:09:52.649 5041.231 - 5066.437: 0.0419% ( 5) 00:09:52.649 5066.437 - 5091.643: 0.1101% ( 13) 00:09:52.649 5091.643 - 5116.849: 0.1730% ( 12) 00:09:52.649 5116.849 - 5142.055: 0.2674% ( 18) 00:09:52.649 5142.055 - 5167.262: 0.3408% ( 14) 00:09:52.649 5167.262 - 5192.468: 0.3985% ( 11) 00:09:52.649 5192.468 - 5217.674: 0.4509% ( 10) 00:09:52.649 5217.674 - 5242.880: 0.5296% ( 15) 00:09:52.649 5242.880 - 5268.086: 0.6187% ( 17) 00:09:52.649 5268.086 - 5293.292: 0.7288% ( 21) 00:09:52.649 5293.292 - 5318.498: 0.8284% ( 19) 00:09:52.649 5318.498 - 5343.705: 1.0067% ( 34) 00:09:52.649 5343.705 - 5368.911: 1.3475% ( 65) 00:09:52.649 5368.911 - 5394.117: 1.5887% ( 46) 00:09:52.649 
5394.117 - 5419.323: 1.9138% ( 62) 00:09:52.649 5419.323 - 5444.529: 2.1602% ( 47) 00:09:52.649 5444.529 - 5469.735: 2.4119% ( 48) 00:09:52.649 5469.735 - 5494.942: 2.7055% ( 56) 00:09:52.649 5494.942 - 5520.148: 3.0201% ( 60) 00:09:52.649 5520.148 - 5545.354: 3.3085% ( 55) 00:09:52.649 5545.354 - 5570.560: 3.6284% ( 61) 00:09:52.649 5570.560 - 5595.766: 4.0478% ( 80) 00:09:52.649 5595.766 - 5620.972: 4.3991% ( 67) 00:09:52.649 5620.972 - 5646.178: 4.9444% ( 104) 00:09:52.649 5646.178 - 5671.385: 5.3796% ( 83) 00:09:52.649 5671.385 - 5696.591: 5.9616% ( 111) 00:09:52.649 5696.591 - 5721.797: 6.6275% ( 127) 00:09:52.649 5721.797 - 5747.003: 7.3196% ( 132) 00:09:52.649 5747.003 - 5772.209: 8.1481% ( 158) 00:09:52.649 5772.209 - 5797.415: 8.9975% ( 162) 00:09:52.649 5797.415 - 5822.622: 10.2664% ( 242) 00:09:52.649 5822.622 - 5847.828: 11.3203% ( 201) 00:09:52.649 5847.828 - 5873.034: 12.5210% ( 229) 00:09:52.649 5873.034 - 5898.240: 13.9471% ( 272) 00:09:52.649 5898.240 - 5923.446: 15.1374% ( 227) 00:09:52.649 5923.446 - 5948.652: 16.6370% ( 286) 00:09:52.649 5948.652 - 5973.858: 18.1418% ( 287) 00:09:52.649 5973.858 - 5999.065: 19.6676% ( 291) 00:09:52.649 5999.065 - 6024.271: 21.2196% ( 296) 00:09:52.649 6024.271 - 6049.477: 22.8974% ( 320) 00:09:52.649 6049.477 - 6074.683: 24.7326% ( 350) 00:09:52.649 6074.683 - 6099.889: 26.9453% ( 422) 00:09:52.649 6099.889 - 6125.095: 29.3310% ( 455) 00:09:52.649 6125.095 - 6150.302: 32.5451% ( 613) 00:09:52.649 6150.302 - 6175.508: 34.9832% ( 465) 00:09:52.649 6175.508 - 6200.714: 37.3689% ( 455) 00:09:52.649 6200.714 - 6225.920: 39.6340% ( 432) 00:09:52.649 6225.920 - 6251.126: 42.4077% ( 529) 00:09:52.649 6251.126 - 6276.332: 45.5222% ( 594) 00:09:52.649 6276.332 - 6301.538: 48.5057% ( 569) 00:09:52.649 6301.538 - 6326.745: 50.5558% ( 391) 00:09:52.649 6326.745 - 6351.951: 52.7947% ( 427) 00:09:52.649 6351.951 - 6377.157: 55.1227% ( 444) 00:09:52.649 6377.157 - 6402.363: 57.5975% ( 472) 00:09:52.649 6402.363 - 6427.569: 59.3226% ( 329) 00:09:52.649 6427.569 - 6452.775: 61.1944% ( 357) 00:09:52.649 6452.775 - 6503.188: 63.9786% ( 531) 00:09:52.649 6503.188 - 6553.600: 66.9883% ( 574) 00:09:52.649 6553.600 - 6604.012: 69.4578% ( 471) 00:09:52.649 6604.012 - 6654.425: 72.2473% ( 532) 00:09:52.649 6654.425 - 6704.837: 74.2135% ( 375) 00:09:52.649 6704.837 - 6755.249: 76.2374% ( 386) 00:09:52.649 6755.249 - 6805.662: 78.4344% ( 419) 00:09:52.649 6805.662 - 6856.074: 79.9549% ( 290) 00:09:52.649 6856.074 - 6906.486: 81.2081% ( 239) 00:09:52.649 6906.486 - 6956.898: 82.6395% ( 273) 00:09:52.649 6956.898 - 7007.311: 83.9083% ( 242) 00:09:52.649 7007.311 - 7057.723: 84.6686% ( 145) 00:09:52.649 7057.723 - 7108.135: 85.7120% ( 199) 00:09:52.649 7108.135 - 7158.548: 86.5510% ( 160) 00:09:52.649 7158.548 - 7208.960: 87.0019% ( 86) 00:09:52.649 7208.960 - 7259.372: 87.4109% ( 78) 00:09:52.649 7259.372 - 7309.785: 87.7464% ( 64) 00:09:52.649 7309.785 - 7360.197: 88.0610% ( 60) 00:09:52.649 7360.197 - 7410.609: 88.4018% ( 65) 00:09:52.649 7410.609 - 7461.022: 88.6850% ( 54) 00:09:52.649 7461.022 - 7511.434: 89.1621% ( 91) 00:09:52.649 7511.434 - 7561.846: 89.3928% ( 44) 00:09:52.649 7561.846 - 7612.258: 89.6026% ( 40) 00:09:52.649 7612.258 - 7662.671: 89.8123% ( 40) 00:09:52.649 7662.671 - 7713.083: 89.9853% ( 33) 00:09:52.649 7713.083 - 7763.495: 90.3104% ( 62) 00:09:52.649 7763.495 - 7813.908: 90.4729% ( 31) 00:09:52.649 7813.908 - 7864.320: 90.5935% ( 23) 00:09:52.649 7864.320 - 7914.732: 90.6774% ( 16) 00:09:52.649 7914.732 - 7965.145: 90.8138% ( 26) 
00:09:52.649 7965.145 - 8015.557: 90.9344% ( 23) 00:09:52.649 8015.557 - 8065.969: 91.1388% ( 39) 00:09:52.649 8065.969 - 8116.382: 91.5216% ( 73) 00:09:52.649 8116.382 - 8166.794: 91.6684% ( 28) 00:09:52.649 8166.794 - 8217.206: 91.8677% ( 38) 00:09:52.649 8217.206 - 8267.618: 92.3291% ( 88) 00:09:52.649 8267.618 - 8318.031: 92.4497% ( 23) 00:09:52.649 8318.031 - 8368.443: 92.5650% ( 22) 00:09:52.649 8368.443 - 8418.855: 92.7013% ( 26) 00:09:52.649 8418.855 - 8469.268: 92.9216% ( 42) 00:09:52.649 8469.268 - 8519.680: 93.0474% ( 24) 00:09:52.649 8519.680 - 8570.092: 93.1575% ( 21) 00:09:52.649 8570.092 - 8620.505: 93.2833% ( 24) 00:09:52.649 8620.505 - 8670.917: 93.5455% ( 50) 00:09:52.649 8670.917 - 8721.329: 93.8339% ( 55) 00:09:52.649 8721.329 - 8771.742: 94.0751% ( 46) 00:09:52.649 8771.742 - 8822.154: 94.3005% ( 43) 00:09:52.649 8822.154 - 8872.566: 94.6099% ( 59) 00:09:52.649 8872.566 - 8922.978: 94.7672% ( 30) 00:09:52.649 8922.978 - 8973.391: 94.8668% ( 19) 00:09:52.649 8973.391 - 9023.803: 94.9612% ( 18) 00:09:52.649 9023.803 - 9074.215: 95.0556% ( 18) 00:09:52.649 9074.215 - 9124.628: 95.1395% ( 16) 00:09:52.649 9124.628 - 9175.040: 95.1971% ( 11) 00:09:52.649 9175.040 - 9225.452: 95.2915% ( 18) 00:09:52.649 9225.452 - 9275.865: 95.3911% ( 19) 00:09:52.649 9275.865 - 9326.277: 95.5275% ( 26) 00:09:52.649 9326.277 - 9376.689: 95.6428% ( 22) 00:09:52.649 9376.689 - 9427.102: 95.7372% ( 18) 00:09:52.649 9427.102 - 9477.514: 95.8159% ( 15) 00:09:52.649 9477.514 - 9527.926: 95.8945% ( 15) 00:09:52.649 9527.926 - 9578.338: 95.9732% ( 15) 00:09:52.649 9578.338 - 9628.751: 96.3087% ( 64) 00:09:52.649 9628.751 - 9679.163: 96.3664% ( 11) 00:09:52.649 9679.163 - 9729.575: 96.4241% ( 11) 00:09:52.649 9729.575 - 9779.988: 96.4660% ( 8) 00:09:52.649 9779.988 - 9830.400: 96.5132% ( 9) 00:09:52.649 9830.400 - 9880.812: 96.5499% ( 7) 00:09:52.649 9880.812 - 9931.225: 96.6023% ( 10) 00:09:52.649 9931.225 - 9981.637: 96.6443% ( 8) 00:09:52.649 9981.637 - 10032.049: 96.6862% ( 8) 00:09:52.649 10032.049 - 10082.462: 96.7387% ( 10) 00:09:52.649 10082.462 - 10132.874: 96.7911% ( 10) 00:09:52.649 10132.874 - 10183.286: 96.8435% ( 10) 00:09:52.649 10183.286 - 10233.698: 96.8907% ( 9) 00:09:52.649 10233.698 - 10284.111: 96.9379% ( 9) 00:09:52.649 10284.111 - 10334.523: 96.9851% ( 9) 00:09:52.649 10334.523 - 10384.935: 97.0323% ( 9) 00:09:52.649 10384.935 - 10435.348: 97.0847% ( 10) 00:09:52.649 10435.348 - 10485.760: 97.1319% ( 9) 00:09:52.649 10485.760 - 10536.172: 97.1739% ( 8) 00:09:52.649 10536.172 - 10586.585: 97.1948% ( 4) 00:09:52.649 10586.585 - 10636.997: 97.2106% ( 3) 00:09:52.649 10636.997 - 10687.409: 97.2315% ( 4) 00:09:52.649 10687.409 - 10737.822: 97.2473% ( 3) 00:09:52.649 10737.822 - 10788.234: 97.2682% ( 4) 00:09:52.649 10788.234 - 10838.646: 97.2840% ( 3) 00:09:52.649 10838.646 - 10889.058: 97.2997% ( 3) 00:09:52.649 10889.058 - 10939.471: 97.3154% ( 3) 00:09:52.649 11040.295 - 11090.708: 97.3259% ( 2) 00:09:52.649 11090.708 - 11141.120: 97.3574% ( 6) 00:09:52.649 11141.120 - 11191.532: 97.4098% ( 10) 00:09:52.649 11191.532 - 11241.945: 97.4780% ( 13) 00:09:52.649 11241.945 - 11292.357: 97.5409% ( 12) 00:09:52.649 11292.357 - 11342.769: 97.6091% ( 13) 00:09:52.649 11342.769 - 11393.182: 97.6615% ( 10) 00:09:52.649 11393.182 - 11443.594: 97.7349% ( 14) 00:09:52.649 11443.594 - 11494.006: 97.8188% ( 16) 00:09:52.649 11494.006 - 11544.418: 98.0023% ( 35) 00:09:52.649 11544.418 - 11594.831: 98.0495% ( 9) 00:09:52.649 11594.831 - 11645.243: 98.1019% ( 10) 00:09:52.649 11645.243 - 11695.655: 
98.1648% ( 12) 00:09:52.649 11695.655 - 11746.068: 98.2383% ( 14) 00:09:52.649 11746.068 - 11796.480: 98.2854% ( 9) 00:09:52.649 11796.480 - 11846.892: 98.3169% ( 6) 00:09:52.649 11846.892 - 11897.305: 98.3484% ( 6) 00:09:52.649 11897.305 - 11947.717: 98.3956% ( 9) 00:09:52.649 11947.717 - 11998.129: 98.4427% ( 9) 00:09:52.649 11998.129 - 12048.542: 98.4952% ( 10) 00:09:52.649 12048.542 - 12098.954: 98.5476% ( 10) 00:09:52.649 12098.954 - 12149.366: 98.6053% ( 11) 00:09:52.649 12149.366 - 12199.778: 98.6577% ( 10) 00:09:52.649 12199.778 - 12250.191: 98.7049% ( 9) 00:09:52.649 12250.191 - 12300.603: 98.7469% ( 8) 00:09:52.650 12300.603 - 12351.015: 98.7836% ( 7) 00:09:52.650 12351.015 - 12401.428: 98.8098% ( 5) 00:09:52.650 12401.428 - 12451.840: 98.8517% ( 8) 00:09:52.650 12451.840 - 12502.252: 98.8884% ( 7) 00:09:52.650 12502.252 - 12552.665: 98.9094% ( 4) 00:09:52.650 12552.665 - 12603.077: 98.9251% ( 3) 00:09:52.650 12603.077 - 12653.489: 98.9461% ( 4) 00:09:52.650 12653.489 - 12703.902: 98.9671% ( 4) 00:09:52.650 12703.902 - 12754.314: 98.9828% ( 3) 00:09:52.650 12754.314 - 12804.726: 99.0038% ( 4) 00:09:52.650 12804.726 - 12855.138: 99.0247% ( 4) 00:09:52.650 12855.138 - 12905.551: 99.0457% ( 4) 00:09:52.650 12905.551 - 13006.375: 99.0877% ( 8) 00:09:52.650 13006.375 - 13107.200: 99.1296% ( 8) 00:09:52.650 13107.200 - 13208.025: 99.1663% ( 7) 00:09:52.650 13208.025 - 13308.849: 99.2083% ( 8) 00:09:52.650 13308.849 - 13409.674: 99.2502% ( 8) 00:09:52.650 13409.674 - 13510.498: 99.2922% ( 8) 00:09:52.650 13510.498 - 13611.323: 99.3236% ( 6) 00:09:52.650 13611.323 - 13712.148: 99.3289% ( 1) 00:09:52.650 21475.643 - 21576.468: 99.3341% ( 1) 00:09:52.650 22181.415 - 22282.240: 99.3656% ( 6) 00:09:52.650 22282.240 - 22383.065: 99.3813% ( 3) 00:09:52.650 22383.065 - 22483.889: 99.4075% ( 5) 00:09:52.650 22483.889 - 22584.714: 99.4337% ( 5) 00:09:52.650 22584.714 - 22685.538: 99.4547% ( 4) 00:09:52.650 22685.538 - 22786.363: 99.4809% ( 5) 00:09:52.650 22786.363 - 22887.188: 99.5176% ( 7) 00:09:52.650 22887.188 - 22988.012: 99.5648% ( 9) 00:09:52.650 22988.012 - 23088.837: 99.6068% ( 8) 00:09:52.650 23088.837 - 23189.662: 99.6539% ( 9) 00:09:52.650 23189.662 - 23290.486: 99.7273% ( 14) 00:09:52.650 23290.486 - 23391.311: 99.7641% ( 7) 00:09:52.650 23391.311 - 23492.135: 99.8060% ( 8) 00:09:52.650 23492.135 - 23592.960: 99.8322% ( 5) 00:09:52.650 23592.960 - 23693.785: 99.8479% ( 3) 00:09:52.650 23693.785 - 23794.609: 99.8637% ( 3) 00:09:52.650 23794.609 - 23895.434: 99.8846% ( 4) 00:09:52.650 23895.434 - 23996.258: 99.9004% ( 3) 00:09:52.650 23996.258 - 24097.083: 99.9161% ( 3) 00:09:52.650 24097.083 - 24197.908: 99.9318% ( 3) 00:09:52.650 24197.908 - 24298.732: 99.9528% ( 4) 00:09:52.650 24298.732 - 24399.557: 99.9685% ( 3) 00:09:52.650 24399.557 - 24500.382: 99.9895% ( 4) 00:09:52.650 24500.382 - 24601.206: 100.0000% ( 2) 00:09:52.650 00:09:52.650 Latency histogram for PCIE (0000:00:08.0) NSID 3 from core 0: 00:09:52.650 ============================================================================== 00:09:52.650 Range in us Cumulative IO count 00:09:52.650 4738.757 - 4763.963: 0.0052% ( 1) 00:09:52.650 4915.200 - 4940.406: 0.0210% ( 3) 00:09:52.650 4990.818 - 5016.025: 0.0262% ( 1) 00:09:52.650 5041.231 - 5066.437: 0.0367% ( 2) 00:09:52.650 5066.437 - 5091.643: 0.0734% ( 7) 00:09:52.650 5091.643 - 5116.849: 0.1416% ( 13) 00:09:52.650 5116.849 - 5142.055: 0.2255% ( 16) 00:09:52.650 5142.055 - 5167.262: 0.3094% ( 16) 00:09:52.650 5167.262 - 5192.468: 0.3618% ( 10) 00:09:52.650 5192.468 - 
5217.674: 0.4404% ( 15) 00:09:52.650 5217.674 - 5242.880: 0.5296% ( 17) 00:09:52.650 5242.880 - 5268.086: 0.6449% ( 22) 00:09:52.650 5268.086 - 5293.292: 0.8022% ( 30) 00:09:52.650 5293.292 - 5318.498: 0.9438% ( 27) 00:09:52.650 5318.498 - 5343.705: 1.1483% ( 39) 00:09:52.650 5343.705 - 5368.911: 1.3580% ( 40) 00:09:52.650 5368.911 - 5394.117: 1.6307% ( 52) 00:09:52.650 5394.117 - 5419.323: 1.9033% ( 52) 00:09:52.650 5419.323 - 5444.529: 2.2913% ( 74) 00:09:52.650 5444.529 - 5469.735: 2.5587% ( 51) 00:09:52.650 5469.735 - 5494.942: 2.8261% ( 51) 00:09:52.650 5494.942 - 5520.148: 3.1565% ( 63) 00:09:52.650 5520.148 - 5545.354: 3.4501% ( 56) 00:09:52.650 5545.354 - 5570.560: 3.8328% ( 73) 00:09:52.650 5570.560 - 5595.766: 4.2785% ( 85) 00:09:52.650 5595.766 - 5620.972: 4.6665% ( 74) 00:09:52.650 5620.972 - 5646.178: 5.1174% ( 86) 00:09:52.650 5646.178 - 5671.385: 5.6523% ( 102) 00:09:52.650 5671.385 - 5696.591: 6.4597% ( 154) 00:09:52.650 5696.591 - 5721.797: 7.2043% ( 142) 00:09:52.650 5721.797 - 5747.003: 7.8649% ( 126) 00:09:52.650 5747.003 - 5772.209: 8.7248% ( 164) 00:09:52.650 5772.209 - 5797.415: 9.6005% ( 167) 00:09:52.650 5797.415 - 5822.622: 10.8012% ( 229) 00:09:52.650 5822.622 - 5847.828: 11.8131% ( 193) 00:09:52.650 5847.828 - 5873.034: 13.1607% ( 257) 00:09:52.650 5873.034 - 5898.240: 14.3666% ( 230) 00:09:52.650 5898.240 - 5923.446: 15.6722% ( 249) 00:09:52.650 5923.446 - 5948.652: 17.2452% ( 300) 00:09:52.650 5948.652 - 5973.858: 18.7762% ( 292) 00:09:52.650 5973.858 - 5999.065: 20.3911% ( 308) 00:09:52.650 5999.065 - 6024.271: 22.0638% ( 319) 00:09:52.650 6024.271 - 6049.477: 24.0510% ( 379) 00:09:52.650 6049.477 - 6074.683: 25.5663% ( 289) 00:09:52.650 6074.683 - 6099.889: 27.4958% ( 368) 00:09:52.650 6099.889 - 6125.095: 29.5302% ( 388) 00:09:52.650 6125.095 - 6150.302: 32.0417% ( 479) 00:09:52.650 6150.302 - 6175.508: 34.3698% ( 444) 00:09:52.650 6175.508 - 6200.714: 36.7450% ( 453) 00:09:52.650 6200.714 - 6225.920: 40.1164% ( 643) 00:09:52.650 6225.920 - 6251.126: 43.0946% ( 568) 00:09:52.650 6251.126 - 6276.332: 46.1514% ( 583) 00:09:52.650 6276.332 - 6301.538: 48.1753% ( 386) 00:09:52.650 6301.538 - 6326.745: 50.9595% ( 531) 00:09:52.650 6326.745 - 6351.951: 53.5078% ( 486) 00:09:52.650 6351.951 - 6377.157: 55.7833% ( 434) 00:09:52.650 6377.157 - 6402.363: 57.5818% ( 343) 00:09:52.650 6402.363 - 6427.569: 59.3226% ( 332) 00:09:52.650 6427.569 - 6452.775: 60.6753% ( 258) 00:09:52.650 6452.775 - 6503.188: 63.5224% ( 543) 00:09:52.650 6503.188 - 6553.600: 66.4692% ( 562) 00:09:52.650 6553.600 - 6604.012: 68.8129% ( 447) 00:09:52.650 6604.012 - 6654.425: 71.2773% ( 470) 00:09:52.650 6654.425 - 6704.837: 73.4532% ( 415) 00:09:52.650 6704.837 - 6755.249: 75.4247% ( 376) 00:09:52.650 6755.249 - 6805.662: 77.1078% ( 321) 00:09:52.650 6805.662 - 6856.074: 79.0321% ( 367) 00:09:52.650 6856.074 - 6906.486: 80.3953% ( 260) 00:09:52.650 6906.486 - 6956.898: 81.9578% ( 298) 00:09:52.650 6956.898 - 7007.311: 83.7668% ( 345) 00:09:52.650 7007.311 - 7057.723: 84.8049% ( 198) 00:09:52.650 7057.723 - 7108.135: 85.4446% ( 122) 00:09:52.650 7108.135 - 7158.548: 85.9899% ( 104) 00:09:52.650 7158.548 - 7208.960: 86.9757% ( 188) 00:09:52.650 7208.960 - 7259.372: 87.3112% ( 64) 00:09:52.650 7259.372 - 7309.785: 87.6468% ( 64) 00:09:52.650 7309.785 - 7360.197: 88.2131% ( 108) 00:09:52.650 7360.197 - 7410.609: 88.5644% ( 67) 00:09:52.650 7410.609 - 7461.022: 88.8895% ( 62) 00:09:52.650 7461.022 - 7511.434: 89.1359% ( 47) 00:09:52.650 7511.434 - 7561.846: 89.2932% ( 30) 00:09:52.650 7561.846 - 
7612.258: 89.4400% ( 28) 00:09:52.650 7612.258 - 7662.671: 89.6340% ( 37) 00:09:52.650 7662.671 - 7713.083: 89.7651% ( 25) 00:09:52.650 7713.083 - 7763.495: 89.8857% ( 23) 00:09:52.650 7763.495 - 7813.908: 89.9906% ( 20) 00:09:52.650 7813.908 - 7864.320: 90.1007% ( 21) 00:09:52.650 7864.320 - 7914.732: 90.1951% ( 18) 00:09:52.650 7914.732 - 7965.145: 90.3209% ( 24) 00:09:52.650 7965.145 - 8015.557: 90.4729% ( 29) 00:09:52.650 8015.557 - 8065.969: 90.6407% ( 32) 00:09:52.650 8065.969 - 8116.382: 90.8033% ( 31) 00:09:52.650 8116.382 - 8166.794: 90.9606% ( 30) 00:09:52.650 8166.794 - 8217.206: 91.0917% ( 25) 00:09:52.650 8217.206 - 8267.618: 91.2332% ( 27) 00:09:52.650 8267.618 - 8318.031: 91.3800% ( 28) 00:09:52.650 8318.031 - 8368.443: 91.5426% ( 31) 00:09:52.650 8368.443 - 8418.855: 91.7523% ( 40) 00:09:52.650 8418.855 - 8469.268: 92.0984% ( 66) 00:09:52.650 8469.268 - 8519.680: 92.3658% ( 51) 00:09:52.650 8519.680 - 8570.092: 92.6070% ( 46) 00:09:52.650 8570.092 - 8620.505: 93.1260% ( 99) 00:09:52.650 8620.505 - 8670.917: 93.5350% ( 78) 00:09:52.650 8670.917 - 8721.329: 93.7448% ( 40) 00:09:52.650 8721.329 - 8771.742: 93.8916% ( 28) 00:09:52.651 8771.742 - 8822.154: 94.0436% ( 29) 00:09:52.651 8822.154 - 8872.566: 94.2219% ( 34) 00:09:52.651 8872.566 - 8922.978: 94.4054% ( 35) 00:09:52.651 8922.978 - 8973.391: 94.6309% ( 43) 00:09:52.651 8973.391 - 9023.803: 95.0136% ( 73) 00:09:52.651 9023.803 - 9074.215: 95.3649% ( 67) 00:09:52.651 9074.215 - 9124.628: 95.6533% ( 55) 00:09:52.651 9124.628 - 9175.040: 95.8893% ( 45) 00:09:52.651 9175.040 - 9225.452: 96.0623% ( 33) 00:09:52.651 9225.452 - 9275.865: 96.2039% ( 27) 00:09:52.651 9275.865 - 9326.277: 96.5447% ( 65) 00:09:52.651 9326.277 - 9376.689: 96.6600% ( 22) 00:09:52.651 9376.689 - 9427.102: 96.7492% ( 17) 00:09:52.651 9427.102 - 9477.514: 96.8173% ( 13) 00:09:52.651 9477.514 - 9527.926: 96.8750% ( 11) 00:09:52.651 9527.926 - 9578.338: 96.9274% ( 10) 00:09:52.651 9578.338 - 9628.751: 96.9641% ( 7) 00:09:52.651 9628.751 - 9679.163: 97.0166% ( 10) 00:09:52.651 9679.163 - 9729.575: 97.0428% ( 5) 00:09:52.651 9729.575 - 9779.988: 97.0638% ( 4) 00:09:52.651 9779.988 - 9830.400: 97.0900% ( 5) 00:09:52.651 9830.400 - 9880.812: 97.1214% ( 6) 00:09:52.651 9880.812 - 9931.225: 97.1581% ( 7) 00:09:52.651 9931.225 - 9981.637: 97.1844% ( 5) 00:09:52.651 9981.637 - 10032.049: 97.2158% ( 6) 00:09:52.651 10032.049 - 10082.462: 97.2473% ( 6) 00:09:52.651 10082.462 - 10132.874: 97.2578% ( 2) 00:09:52.651 10132.874 - 10183.286: 97.2735% ( 3) 00:09:52.651 10183.286 - 10233.698: 97.2840% ( 2) 00:09:52.651 10233.698 - 10284.111: 97.2997% ( 3) 00:09:52.651 10284.111 - 10334.523: 97.3154% ( 3) 00:09:52.651 10536.172 - 10586.585: 97.4203% ( 20) 00:09:52.651 10586.585 - 10636.997: 97.4465% ( 5) 00:09:52.651 10636.997 - 10687.409: 97.4518% ( 1) 00:09:52.651 10687.409 - 10737.822: 97.4622% ( 2) 00:09:52.651 10737.822 - 10788.234: 97.4727% ( 2) 00:09:52.651 10788.234 - 10838.646: 97.4885% ( 3) 00:09:52.651 10838.646 - 10889.058: 97.5094% ( 4) 00:09:52.651 10889.058 - 10939.471: 97.5304% ( 4) 00:09:52.651 10939.471 - 10989.883: 97.5566% ( 5) 00:09:52.651 10989.883 - 11040.295: 97.5776% ( 4) 00:09:52.651 11040.295 - 11090.708: 97.5986% ( 4) 00:09:52.651 11090.708 - 11141.120: 97.6143% ( 3) 00:09:52.651 11141.120 - 11191.532: 97.6353% ( 4) 00:09:52.651 11191.532 - 11241.945: 97.6562% ( 4) 00:09:52.651 11241.945 - 11292.357: 97.6772% ( 4) 00:09:52.651 11292.357 - 11342.769: 97.6982% ( 4) 00:09:52.651 11342.769 - 11393.182: 97.7297% ( 6) 00:09:52.651 11393.182 - 
11443.594: 97.7768% ( 9) 00:09:52.651 11443.594 - 11494.006: 97.8293% ( 10) 00:09:52.651 11494.006 - 11544.418: 97.8765% ( 9) 00:09:52.651 11544.418 - 11594.831: 97.9394% ( 12) 00:09:52.651 11594.831 - 11645.243: 97.9918% ( 10) 00:09:52.651 11645.243 - 11695.655: 98.0390% ( 9) 00:09:52.651 11695.655 - 11746.068: 98.0967% ( 11) 00:09:52.651 11746.068 - 11796.480: 98.1753% ( 15) 00:09:52.651 11796.480 - 11846.892: 98.3274% ( 29) 00:09:52.651 11846.892 - 11897.305: 98.3746% ( 9) 00:09:52.651 11897.305 - 11947.717: 98.4060% ( 6) 00:09:52.651 11947.717 - 11998.129: 98.4427% ( 7) 00:09:52.651 11998.129 - 12048.542: 98.4742% ( 6) 00:09:52.651 12048.542 - 12098.954: 98.5371% ( 12) 00:09:52.651 12098.954 - 12149.366: 98.5896% ( 10) 00:09:52.651 12149.366 - 12199.778: 98.6263% ( 7) 00:09:52.651 12199.778 - 12250.191: 98.6577% ( 6) 00:09:52.651 12250.191 - 12300.603: 98.6997% ( 8) 00:09:52.651 12300.603 - 12351.015: 98.7311% ( 6) 00:09:52.651 12351.015 - 12401.428: 98.7731% ( 8) 00:09:52.651 12401.428 - 12451.840: 98.8098% ( 7) 00:09:52.651 12451.840 - 12502.252: 98.8360% ( 5) 00:09:52.651 12502.252 - 12552.665: 98.8622% ( 5) 00:09:52.651 12552.665 - 12603.077: 98.8832% ( 4) 00:09:52.651 12603.077 - 12653.489: 98.8989% ( 3) 00:09:52.651 12653.489 - 12703.902: 98.9199% ( 4) 00:09:52.651 12703.902 - 12754.314: 98.9409% ( 4) 00:09:52.651 12754.314 - 12804.726: 98.9618% ( 4) 00:09:52.651 12804.726 - 12855.138: 98.9828% ( 4) 00:09:52.651 12855.138 - 12905.551: 99.0038% ( 4) 00:09:52.651 12905.551 - 13006.375: 99.0352% ( 6) 00:09:52.651 13006.375 - 13107.200: 99.0772% ( 8) 00:09:52.651 13107.200 - 13208.025: 99.1139% ( 7) 00:09:52.651 13208.025 - 13308.849: 99.1558% ( 8) 00:09:52.651 13308.849 - 13409.674: 99.1925% ( 7) 00:09:52.651 13409.674 - 13510.498: 99.2345% ( 8) 00:09:52.651 13510.498 - 13611.323: 99.2764% ( 8) 00:09:52.651 13611.323 - 13712.148: 99.3131% ( 7) 00:09:52.651 13712.148 - 13812.972: 99.3289% ( 3) 00:09:52.651 21677.292 - 21778.117: 99.3498% ( 4) 00:09:52.651 21778.117 - 21878.942: 99.3760% ( 5) 00:09:52.651 21878.942 - 21979.766: 99.3970% ( 4) 00:09:52.651 21979.766 - 22080.591: 99.4285% ( 6) 00:09:52.651 22080.591 - 22181.415: 99.4547% ( 5) 00:09:52.651 22181.415 - 22282.240: 99.5019% ( 9) 00:09:52.651 22282.240 - 22383.065: 99.5491% ( 9) 00:09:52.651 22383.065 - 22483.889: 99.7903% ( 46) 00:09:52.651 22483.889 - 22584.714: 99.8165% ( 5) 00:09:52.651 22988.012 - 23088.837: 99.8322% ( 3) 00:09:52.651 23088.837 - 23189.662: 99.8479% ( 3) 00:09:52.651 23189.662 - 23290.486: 99.8637% ( 3) 00:09:52.651 23290.486 - 23391.311: 99.8794% ( 3) 00:09:52.651 23391.311 - 23492.135: 99.8951% ( 3) 00:09:52.651 23492.135 - 23592.960: 99.9161% ( 4) 00:09:52.651 23592.960 - 23693.785: 99.9266% ( 2) 00:09:52.651 23693.785 - 23794.609: 99.9423% ( 3) 00:09:52.651 23794.609 - 23895.434: 99.9633% ( 4) 00:09:52.651 23895.434 - 23996.258: 99.9790% ( 3) 00:09:52.651 23996.258 - 24097.083: 99.9948% ( 3) 00:09:52.651 24097.083 - 24197.908: 100.0000% ( 1) 00:09:52.651 00:09:52.651 16:19:12 -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:52.651 00:09:52.651 real 0m2.574s 00:09:52.651 user 0m2.268s 00:09:52.651 sys 0m0.214s 00:09:52.651 16:19:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:52.651 16:19:12 -- common/autotest_common.sh@10 -- # set +x 00:09:52.651 ************************************ 00:09:52.651 END TEST nvme_perf 00:09:52.651 ************************************ 00:09:52.651 16:19:12 -- nvme/nvme.sh@87 -- # run_test nvme_hello_world 
/home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:52.651 16:19:12 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:52.651 16:19:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:52.651 16:19:12 -- common/autotest_common.sh@10 -- # set +x 00:09:52.651 ************************************ 00:09:52.651 START TEST nvme_hello_world 00:09:52.651 ************************************ 00:09:52.651 16:19:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:52.909 Initializing NVMe Controllers 00:09:52.909 Attached to 0000:00:06.0 00:09:52.909 Namespace ID: 1 size: 6GB 00:09:52.909 Attached to 0000:00:07.0 00:09:52.909 Namespace ID: 1 size: 5GB 00:09:52.909 Attached to 0000:00:09.0 00:09:52.909 Namespace ID: 1 size: 1GB 00:09:52.909 Attached to 0000:00:08.0 00:09:52.909 Namespace ID: 1 size: 4GB 00:09:52.909 Namespace ID: 2 size: 4GB 00:09:52.909 Namespace ID: 3 size: 4GB 00:09:52.909 Initialization complete. 00:09:52.909 INFO: using host memory buffer for IO 00:09:52.909 Hello world! 00:09:52.909 INFO: using host memory buffer for IO 00:09:52.909 Hello world! 00:09:52.909 INFO: using host memory buffer for IO 00:09:52.909 Hello world! 00:09:52.909 INFO: using host memory buffer for IO 00:09:52.909 Hello world! 00:09:52.909 INFO: using host memory buffer for IO 00:09:52.909 Hello world! 00:09:52.909 INFO: using host memory buffer for IO 00:09:52.909 Hello world! 00:09:52.909 00:09:52.909 real 0m0.273s 00:09:52.909 user 0m0.123s 00:09:52.909 sys 0m0.101s 00:09:52.909 16:19:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:52.909 16:19:12 -- common/autotest_common.sh@10 -- # set +x 00:09:52.909 ************************************ 00:09:52.909 END TEST nvme_hello_world 00:09:52.909 ************************************ 00:09:52.909 16:19:12 -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:52.909 16:19:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:52.909 16:19:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:52.909 16:19:12 -- common/autotest_common.sh@10 -- # set +x 00:09:52.909 ************************************ 00:09:52.909 START TEST nvme_sgl 00:09:52.909 ************************************ 00:09:52.909 16:19:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:52.909 0000:00:06.0: build_io_request_0 Invalid IO length parameter 00:09:52.909 0000:00:06.0: build_io_request_1 Invalid IO length parameter 00:09:53.169 0000:00:06.0: build_io_request_3 Invalid IO length parameter 00:09:53.169 0000:00:06.0: build_io_request_8 Invalid IO length parameter 00:09:53.169 0000:00:06.0: build_io_request_9 Invalid IO length parameter 00:09:53.169 0000:00:06.0: build_io_request_11 Invalid IO length parameter 00:09:53.169 0000:00:07.0: build_io_request_0 Invalid IO length parameter 00:09:53.169 0000:00:07.0: build_io_request_1 Invalid IO length parameter 00:09:53.169 0000:00:07.0: build_io_request_3 Invalid IO length parameter 00:09:53.169 0000:00:07.0: build_io_request_8 Invalid IO length parameter 00:09:53.169 0000:00:07.0: build_io_request_9 Invalid IO length parameter 00:09:53.169 0000:00:07.0: build_io_request_11 Invalid IO length parameter 00:09:53.169 0000:00:09.0: build_io_request_0 Invalid IO length parameter 00:09:53.169 0000:00:09.0: build_io_request_1 Invalid IO length parameter 00:09:53.169 0000:00:09.0: build_io_request_2 Invalid IO length parameter 00:09:53.169 
0000:00:09.0: build_io_request_3 Invalid IO length parameter 00:09:53.169 0000:00:09.0: build_io_request_4 Invalid IO length parameter 00:09:53.169 0000:00:09.0: build_io_request_5 Invalid IO length parameter 00:09:53.169 0000:00:09.0: build_io_request_6 Invalid IO length parameter 00:09:53.169 0000:00:09.0: build_io_request_7 Invalid IO length parameter 00:09:53.169 0000:00:09.0: build_io_request_8 Invalid IO length parameter 00:09:53.169 0000:00:09.0: build_io_request_9 Invalid IO length parameter 00:09:53.169 0000:00:09.0: build_io_request_10 Invalid IO length parameter 00:09:53.169 0000:00:09.0: build_io_request_11 Invalid IO length parameter 00:09:53.169 0000:00:08.0: build_io_request_0 Invalid IO length parameter 00:09:53.169 0000:00:08.0: build_io_request_1 Invalid IO length parameter 00:09:53.169 0000:00:08.0: build_io_request_2 Invalid IO length parameter 00:09:53.169 0000:00:08.0: build_io_request_3 Invalid IO length parameter 00:09:53.169 0000:00:08.0: build_io_request_4 Invalid IO length parameter 00:09:53.169 0000:00:08.0: build_io_request_5 Invalid IO length parameter 00:09:53.169 0000:00:08.0: build_io_request_6 Invalid IO length parameter 00:09:53.169 0000:00:08.0: build_io_request_7 Invalid IO length parameter 00:09:53.169 0000:00:08.0: build_io_request_8 Invalid IO length parameter 00:09:53.169 0000:00:08.0: build_io_request_9 Invalid IO length parameter 00:09:53.169 0000:00:08.0: build_io_request_10 Invalid IO length parameter 00:09:53.169 0000:00:08.0: build_io_request_11 Invalid IO length parameter 00:09:53.169 NVMe Readv/Writev Request test 00:09:53.169 Attached to 0000:00:06.0 00:09:53.169 Attached to 0000:00:07.0 00:09:53.169 Attached to 0000:00:09.0 00:09:53.169 Attached to 0000:00:08.0 00:09:53.169 0000:00:06.0: build_io_request_2 test passed 00:09:53.169 0000:00:06.0: build_io_request_4 test passed 00:09:53.169 0000:00:06.0: build_io_request_5 test passed 00:09:53.169 0000:00:06.0: build_io_request_6 test passed 00:09:53.169 0000:00:06.0: build_io_request_7 test passed 00:09:53.169 0000:00:06.0: build_io_request_10 test passed 00:09:53.169 0000:00:07.0: build_io_request_2 test passed 00:09:53.169 0000:00:07.0: build_io_request_4 test passed 00:09:53.169 0000:00:07.0: build_io_request_5 test passed 00:09:53.170 0000:00:07.0: build_io_request_6 test passed 00:09:53.170 0000:00:07.0: build_io_request_7 test passed 00:09:53.170 0000:00:07.0: build_io_request_10 test passed 00:09:53.170 Cleaning up... 
00:09:53.170 ************************************ 00:09:53.170 END TEST nvme_sgl 00:09:53.170 ************************************ 00:09:53.170 00:09:53.170 real 0m0.374s 00:09:53.170 user 0m0.243s 00:09:53.170 sys 0m0.090s 00:09:53.170 16:19:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:53.170 16:19:12 -- common/autotest_common.sh@10 -- # set +x 00:09:53.170 16:19:12 -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:53.170 16:19:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:53.170 16:19:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:53.170 16:19:12 -- common/autotest_common.sh@10 -- # set +x 00:09:53.170 ************************************ 00:09:53.170 START TEST nvme_e2edp 00:09:53.170 ************************************ 00:09:53.170 16:19:12 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:53.447 NVMe Write/Read with End-to-End data protection test 00:09:53.448 Attached to 0000:00:06.0 00:09:53.448 Attached to 0000:00:07.0 00:09:53.448 Attached to 0000:00:09.0 00:09:53.448 Attached to 0000:00:08.0 00:09:53.448 Cleaning up... 00:09:53.448 00:09:53.448 real 0m0.192s 00:09:53.448 user 0m0.055s 00:09:53.448 sys 0m0.093s 00:09:53.448 16:19:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:53.448 ************************************ 00:09:53.448 END TEST nvme_e2edp 00:09:53.448 ************************************ 00:09:53.448 16:19:13 -- common/autotest_common.sh@10 -- # set +x 00:09:53.448 16:19:13 -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:53.448 16:19:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:53.448 16:19:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:53.448 16:19:13 -- common/autotest_common.sh@10 -- # set +x 00:09:53.448 ************************************ 00:09:53.448 START TEST nvme_reserve 00:09:53.448 ************************************ 00:09:53.448 16:19:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:53.705 ===================================================== 00:09:53.705 NVMe Controller at PCI bus 0, device 6, function 0 00:09:53.705 ===================================================== 00:09:53.705 Reservations: Not Supported 00:09:53.705 ===================================================== 00:09:53.705 NVMe Controller at PCI bus 0, device 7, function 0 00:09:53.705 ===================================================== 00:09:53.705 Reservations: Not Supported 00:09:53.705 ===================================================== 00:09:53.705 NVMe Controller at PCI bus 0, device 9, function 0 00:09:53.705 ===================================================== 00:09:53.705 Reservations: Not Supported 00:09:53.705 ===================================================== 00:09:53.705 NVMe Controller at PCI bus 0, device 8, function 0 00:09:53.705 ===================================================== 00:09:53.705 Reservations: Not Supported 00:09:53.705 Reservation test passed 00:09:53.705 00:09:53.705 real 0m0.194s 00:09:53.705 user 0m0.056s 00:09:53.705 sys 0m0.096s 00:09:53.705 ************************************ 00:09:53.705 END TEST nvme_reserve 00:09:53.705 ************************************ 00:09:53.705 16:19:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:53.705 16:19:13 -- common/autotest_common.sh@10 -- # set +x 00:09:53.705 16:19:13 -- 
nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:53.705 16:19:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:53.705 16:19:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:53.705 16:19:13 -- common/autotest_common.sh@10 -- # set +x 00:09:53.705 ************************************ 00:09:53.705 START TEST nvme_err_injection 00:09:53.705 ************************************ 00:09:53.705 16:19:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:53.963 NVMe Error Injection test 00:09:53.963 Attached to 0000:00:06.0 00:09:53.963 Attached to 0000:00:07.0 00:09:53.963 Attached to 0000:00:09.0 00:09:53.963 Attached to 0000:00:08.0 00:09:53.963 0000:00:06.0: get features failed as expected 00:09:53.963 0000:00:07.0: get features failed as expected 00:09:53.963 0000:00:09.0: get features failed as expected 00:09:53.963 0000:00:08.0: get features failed as expected 00:09:53.963 0000:00:06.0: get features successfully as expected 00:09:53.963 0000:00:07.0: get features successfully as expected 00:09:53.963 0000:00:09.0: get features successfully as expected 00:09:53.963 0000:00:08.0: get features successfully as expected 00:09:53.963 0000:00:06.0: read failed as expected 00:09:53.963 0000:00:07.0: read failed as expected 00:09:53.963 0000:00:09.0: read failed as expected 00:09:53.963 0000:00:08.0: read failed as expected 00:09:53.963 0000:00:06.0: read successfully as expected 00:09:53.963 0000:00:07.0: read successfully as expected 00:09:53.963 0000:00:09.0: read successfully as expected 00:09:53.963 0000:00:08.0: read successfully as expected 00:09:53.963 Cleaning up... 00:09:53.963 00:09:53.963 real 0m0.250s 00:09:53.963 user 0m0.112s 00:09:53.963 sys 0m0.090s 00:09:53.963 16:19:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:53.963 ************************************ 00:09:53.963 END TEST nvme_err_injection 00:09:53.963 ************************************ 00:09:53.963 16:19:13 -- common/autotest_common.sh@10 -- # set +x 00:09:53.963 16:19:13 -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:53.963 16:19:13 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:09:53.963 16:19:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:53.963 16:19:13 -- common/autotest_common.sh@10 -- # set +x 00:09:53.963 ************************************ 00:09:53.963 START TEST nvme_overhead 00:09:53.963 ************************************ 00:09:53.963 16:19:13 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:55.340 Initializing NVMe Controllers 00:09:55.340 Attached to 0000:00:06.0 00:09:55.340 Attached to 0000:00:07.0 00:09:55.340 Attached to 0000:00:09.0 00:09:55.340 Attached to 0000:00:08.0 00:09:55.340 Initialization complete. Launching workers. 
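Before the histograms below, a note on the invocation: the harness runs the tool as overhead -o 4096 -t 1 -H -i 0. Judging from the surrounding output, -o is the IO size in bytes, -t the runtime in seconds, -H enables the submit/complete latency histograms that follow, and -i the DPDK shared-memory id; treat these decodings as inferred from context rather than authoritative. A standalone sketch:

  # Hedged re-run with the exact flags from the log above.
  sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0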
00:09:55.340 submit (in ns) avg, min, max = 11384.4, 9734.6, 277128.5 00:09:55.340 complete (in ns) avg, min, max = 7565.8, 7202.3, 336676.9 00:09:55.340 00:09:55.340 Submit histogram 00:09:55.340 ================ 00:09:55.340 Range in us Cumulative Count 00:09:55.340 9.698 - 9.748: 0.0056% ( 1) 00:09:55.340 10.880 - 10.929: 0.0506% ( 8) 00:09:55.340 10.929 - 10.978: 0.8160% ( 136) 00:09:55.340 10.978 - 11.028: 6.0551% ( 931) 00:09:55.340 11.028 - 11.077: 20.1857% ( 2511) 00:09:55.340 11.077 - 11.126: 42.0259% ( 3881) 00:09:55.340 11.126 - 11.175: 61.4856% ( 3458) 00:09:55.340 11.175 - 11.225: 74.0293% ( 2229) 00:09:55.340 11.225 - 11.274: 80.8779% ( 1217) 00:09:55.340 11.274 - 11.323: 84.2150% ( 593) 00:09:55.340 11.323 - 11.372: 85.9482% ( 308) 00:09:55.340 11.372 - 11.422: 87.1806% ( 219) 00:09:55.340 11.422 - 11.471: 88.0698% ( 158) 00:09:55.340 11.471 - 11.520: 88.7788% ( 126) 00:09:55.340 11.520 - 11.569: 89.5385% ( 135) 00:09:55.340 11.569 - 11.618: 90.2364% ( 124) 00:09:55.340 11.618 - 11.668: 90.8554% ( 110) 00:09:55.340 11.668 - 11.717: 91.4744% ( 110) 00:09:55.340 11.717 - 11.766: 92.0315% ( 99) 00:09:55.340 11.766 - 11.815: 92.4705% ( 78) 00:09:55.340 11.815 - 11.865: 92.8362% ( 65) 00:09:55.340 11.865 - 11.914: 93.2752% ( 78) 00:09:55.340 11.914 - 11.963: 93.7029% ( 76) 00:09:55.340 11.963 - 12.012: 94.1699% ( 83) 00:09:55.340 12.012 - 12.062: 94.4851% ( 56) 00:09:55.340 12.062 - 12.111: 94.8115% ( 58) 00:09:55.340 12.111 - 12.160: 95.2448% ( 77) 00:09:55.340 12.160 - 12.209: 95.5037% ( 46) 00:09:55.340 12.209 - 12.258: 95.7625% ( 46) 00:09:55.340 12.258 - 12.308: 96.0326% ( 48) 00:09:55.340 12.308 - 12.357: 96.3140% ( 50) 00:09:55.340 12.357 - 12.406: 96.4772% ( 29) 00:09:55.340 12.406 - 12.455: 96.6348% ( 28) 00:09:55.340 12.455 - 12.505: 96.7417% ( 19) 00:09:55.340 12.505 - 12.554: 96.8261% ( 15) 00:09:55.340 12.554 - 12.603: 96.8542% ( 5) 00:09:55.340 12.603 - 12.702: 96.8936% ( 7) 00:09:55.340 12.702 - 12.800: 96.9049% ( 2) 00:09:55.340 12.800 - 12.898: 96.9612% ( 10) 00:09:55.340 12.898 - 12.997: 97.0456% ( 15) 00:09:55.340 12.997 - 13.095: 97.1300% ( 15) 00:09:55.340 13.095 - 13.194: 97.2313% ( 18) 00:09:55.340 13.194 - 13.292: 97.3326% ( 18) 00:09:55.340 13.292 - 13.391: 97.3945% ( 11) 00:09:55.340 13.391 - 13.489: 97.5014% ( 19) 00:09:55.340 13.489 - 13.588: 97.5689% ( 12) 00:09:55.340 13.588 - 13.686: 97.6308% ( 11) 00:09:55.340 13.686 - 13.785: 97.6590% ( 5) 00:09:55.340 13.785 - 13.883: 97.7096% ( 9) 00:09:55.340 13.883 - 13.982: 97.7434% ( 6) 00:09:55.340 13.982 - 14.080: 97.7828% ( 7) 00:09:55.340 14.080 - 14.178: 97.8278% ( 8) 00:09:55.340 14.178 - 14.277: 97.8391% ( 2) 00:09:55.340 14.277 - 14.375: 97.9010% ( 11) 00:09:55.340 14.375 - 14.474: 97.9629% ( 11) 00:09:55.340 14.474 - 14.572: 98.0079% ( 8) 00:09:55.340 14.572 - 14.671: 98.0360% ( 5) 00:09:55.340 14.671 - 14.769: 98.0698% ( 6) 00:09:55.340 14.769 - 14.868: 98.0810% ( 2) 00:09:55.340 14.868 - 14.966: 98.1204% ( 7) 00:09:55.340 14.966 - 15.065: 98.1542% ( 6) 00:09:55.341 15.065 - 15.163: 98.1767% ( 4) 00:09:55.341 15.163 - 15.262: 98.2161% ( 7) 00:09:55.341 15.262 - 15.360: 98.2386% ( 4) 00:09:55.341 15.360 - 15.458: 98.2667% ( 5) 00:09:55.341 15.458 - 15.557: 98.2724% ( 1) 00:09:55.341 15.557 - 15.655: 98.2780% ( 1) 00:09:55.341 15.655 - 15.754: 98.3230% ( 8) 00:09:55.341 15.754 - 15.852: 98.3399% ( 3) 00:09:55.341 15.852 - 15.951: 98.3568% ( 3) 00:09:55.341 15.951 - 16.049: 98.3849% ( 5) 00:09:55.341 16.049 - 16.148: 98.3905% ( 1) 00:09:55.341 16.148 - 16.246: 98.4131% ( 4) 00:09:55.341 16.246 - 
16.345: 98.4299% ( 3) 00:09:55.341 16.345 - 16.443: 98.4468% ( 3) 00:09:55.341 16.443 - 16.542: 98.4581% ( 2) 00:09:55.341 16.542 - 16.640: 98.5031% ( 8) 00:09:55.341 16.640 - 16.738: 98.5875% ( 15) 00:09:55.341 16.738 - 16.837: 98.7057% ( 21) 00:09:55.341 16.837 - 16.935: 98.7957% ( 16) 00:09:55.341 16.935 - 17.034: 98.8858% ( 16) 00:09:55.341 17.034 - 17.132: 98.9702% ( 15) 00:09:55.341 17.132 - 17.231: 99.0658% ( 17) 00:09:55.341 17.231 - 17.329: 99.1334% ( 12) 00:09:55.341 17.329 - 17.428: 99.2065% ( 13) 00:09:55.341 17.428 - 17.526: 99.2741% ( 12) 00:09:55.341 17.526 - 17.625: 99.3472% ( 13) 00:09:55.341 17.625 - 17.723: 99.3979% ( 9) 00:09:55.341 17.723 - 17.822: 99.4429% ( 8) 00:09:55.341 17.822 - 17.920: 99.4598% ( 3) 00:09:55.341 17.920 - 18.018: 99.4879% ( 5) 00:09:55.341 18.018 - 18.117: 99.5217% ( 6) 00:09:55.341 18.117 - 18.215: 99.5611% ( 7) 00:09:55.341 18.215 - 18.314: 99.5948% ( 6) 00:09:55.341 18.314 - 18.412: 99.6286% ( 6) 00:09:55.341 18.412 - 18.511: 99.6680% ( 7) 00:09:55.341 18.511 - 18.609: 99.6792% ( 2) 00:09:55.341 18.609 - 18.708: 99.7130% ( 6) 00:09:55.341 18.708 - 18.806: 99.7468% ( 6) 00:09:55.341 18.806 - 18.905: 99.7636% ( 3) 00:09:55.341 18.905 - 19.003: 99.7693% ( 1) 00:09:55.341 19.003 - 19.102: 99.7862% ( 3) 00:09:55.341 19.102 - 19.200: 99.7918% ( 1) 00:09:55.341 19.200 - 19.298: 99.7974% ( 1) 00:09:55.341 19.298 - 19.397: 99.8030% ( 1) 00:09:55.341 19.397 - 19.495: 99.8087% ( 1) 00:09:55.341 19.791 - 19.889: 99.8143% ( 1) 00:09:55.341 20.086 - 20.185: 99.8199% ( 1) 00:09:55.341 20.382 - 20.480: 99.8255% ( 1) 00:09:55.341 20.480 - 20.578: 99.8312% ( 1) 00:09:55.341 20.775 - 20.874: 99.8424% ( 2) 00:09:55.341 20.874 - 20.972: 99.8537% ( 2) 00:09:55.341 20.972 - 21.071: 99.8593% ( 1) 00:09:55.341 21.268 - 21.366: 99.8649% ( 1) 00:09:55.341 22.154 - 22.252: 99.8762% ( 2) 00:09:55.341 22.351 - 22.449: 99.8875% ( 2) 00:09:55.341 22.449 - 22.548: 99.8931% ( 1) 00:09:55.341 22.646 - 22.745: 99.8987% ( 1) 00:09:55.341 22.745 - 22.843: 99.9043% ( 1) 00:09:55.341 22.942 - 23.040: 99.9100% ( 1) 00:09:55.341 23.237 - 23.335: 99.9156% ( 1) 00:09:55.341 23.434 - 23.532: 99.9212% ( 1) 00:09:55.341 23.532 - 23.631: 99.9268% ( 1) 00:09:55.341 24.222 - 24.320: 99.9325% ( 1) 00:09:55.341 24.320 - 24.418: 99.9381% ( 1) 00:09:55.341 24.517 - 24.615: 99.9437% ( 1) 00:09:55.341 24.714 - 24.812: 99.9494% ( 1) 00:09:55.341 24.812 - 24.911: 99.9550% ( 1) 00:09:55.341 26.782 - 26.978: 99.9606% ( 1) 00:09:55.341 33.477 - 33.674: 99.9662% ( 1) 00:09:55.341 39.582 - 39.778: 99.9719% ( 1) 00:09:55.341 40.369 - 40.566: 99.9775% ( 1) 00:09:55.341 57.895 - 58.289: 99.9831% ( 1) 00:09:55.341 66.954 - 67.348: 99.9887% ( 1) 00:09:55.341 67.742 - 68.135: 99.9944% ( 1) 00:09:55.341 275.692 - 277.268: 100.0000% ( 1) 00:09:55.341 00:09:55.341 Complete histogram 00:09:55.341 ================== 00:09:55.341 Range in us Cumulative Count 00:09:55.341 7.188 - 7.237: 0.0732% ( 13) 00:09:55.341 7.237 - 7.286: 1.9640% ( 336) 00:09:55.341 7.286 - 7.335: 13.5678% ( 2062) 00:09:55.341 7.335 - 7.385: 37.2257% ( 4204) 00:09:55.341 7.385 - 7.434: 62.4423% ( 4481) 00:09:55.341 7.434 - 7.483: 79.5104% ( 3033) 00:09:55.341 7.483 - 7.532: 88.8576% ( 1661) 00:09:55.341 7.532 - 7.582: 93.2189% ( 775) 00:09:55.341 7.582 - 7.631: 94.8846% ( 296) 00:09:55.341 7.631 - 7.680: 95.5824% ( 124) 00:09:55.341 7.680 - 7.729: 95.9482% ( 65) 00:09:55.341 7.729 - 7.778: 96.1452% ( 35) 00:09:55.341 7.778 - 7.828: 96.2859% ( 25) 00:09:55.341 7.828 - 7.877: 96.3703% ( 15) 00:09:55.341 7.877 - 7.926: 96.4434% ( 13) 00:09:55.341 
7.926 - 7.975: 96.5110% ( 12) 00:09:55.341 7.975 - 8.025: 96.5729% ( 11) 00:09:55.341 8.025 - 8.074: 96.7079% ( 24) 00:09:55.341 8.074 - 8.123: 96.9893% ( 50) 00:09:55.341 8.123 - 8.172: 97.3720% ( 68) 00:09:55.341 8.172 - 8.222: 97.7209% ( 62) 00:09:55.341 8.222 - 8.271: 97.9629% ( 43) 00:09:55.341 8.271 - 8.320: 98.0585% ( 17) 00:09:55.341 8.320 - 8.369: 98.1317% ( 13) 00:09:55.341 8.369 - 8.418: 98.1486% ( 3) 00:09:55.341 8.418 - 8.468: 98.1542% ( 1) 00:09:55.341 8.468 - 8.517: 98.1711% ( 3) 00:09:55.341 8.517 - 8.566: 98.1767% ( 1) 00:09:55.341 8.566 - 8.615: 98.1880% ( 2) 00:09:55.341 8.862 - 8.911: 98.1936% ( 1) 00:09:55.341 9.108 - 9.157: 98.1992% ( 1) 00:09:55.341 9.255 - 9.305: 98.2048% ( 1) 00:09:55.341 9.354 - 9.403: 98.2105% ( 1) 00:09:55.341 9.452 - 9.502: 98.2273% ( 3) 00:09:55.341 9.502 - 9.551: 98.2330% ( 1) 00:09:55.341 9.600 - 9.649: 98.2442% ( 2) 00:09:55.341 9.649 - 9.698: 98.2611% ( 3) 00:09:55.341 9.698 - 9.748: 98.2667% ( 1) 00:09:55.341 9.748 - 9.797: 98.2780% ( 2) 00:09:55.341 9.797 - 9.846: 98.2836% ( 1) 00:09:55.341 9.846 - 9.895: 98.3005% ( 3) 00:09:55.341 9.895 - 9.945: 98.3174% ( 3) 00:09:55.341 9.994 - 10.043: 98.3343% ( 3) 00:09:55.341 10.043 - 10.092: 98.3455% ( 2) 00:09:55.341 10.092 - 10.142: 98.3624% ( 3) 00:09:55.341 10.142 - 10.191: 98.3793% ( 3) 00:09:55.341 10.191 - 10.240: 98.4074% ( 5) 00:09:55.341 10.240 - 10.289: 98.4356% ( 5) 00:09:55.341 10.289 - 10.338: 98.4468% ( 2) 00:09:55.341 10.338 - 10.388: 98.4524% ( 1) 00:09:55.341 10.388 - 10.437: 98.4637% ( 2) 00:09:55.341 10.437 - 10.486: 98.4693% ( 1) 00:09:55.341 10.486 - 10.535: 98.4975% ( 5) 00:09:55.341 10.535 - 10.585: 98.5031% ( 1) 00:09:55.341 10.585 - 10.634: 98.5144% ( 2) 00:09:55.341 10.634 - 10.683: 98.5200% ( 1) 00:09:55.341 10.683 - 10.732: 98.5256% ( 1) 00:09:55.341 10.732 - 10.782: 98.5312% ( 1) 00:09:55.341 10.831 - 10.880: 98.5369% ( 1) 00:09:55.341 10.880 - 10.929: 98.5425% ( 1) 00:09:55.341 10.929 - 10.978: 98.5537% ( 2) 00:09:55.341 10.978 - 11.028: 98.5706% ( 3) 00:09:55.341 11.077 - 11.126: 98.5763% ( 1) 00:09:55.341 11.175 - 11.225: 98.5875% ( 2) 00:09:55.341 11.225 - 11.274: 98.5931% ( 1) 00:09:55.341 11.372 - 11.422: 98.5988% ( 1) 00:09:55.341 11.471 - 11.520: 98.6044% ( 1) 00:09:55.341 11.668 - 11.717: 98.6100% ( 1) 00:09:55.341 11.815 - 11.865: 98.6156% ( 1) 00:09:55.341 11.963 - 12.012: 98.6213% ( 1) 00:09:55.341 12.012 - 12.062: 98.6382% ( 3) 00:09:55.341 12.111 - 12.160: 98.6438% ( 1) 00:09:55.341 12.160 - 12.209: 98.6494% ( 1) 00:09:55.341 12.258 - 12.308: 98.6550% ( 1) 00:09:55.341 12.357 - 12.406: 98.6607% ( 1) 00:09:55.341 12.455 - 12.505: 98.6663% ( 1) 00:09:55.341 12.702 - 12.800: 98.6832% ( 3) 00:09:55.341 12.800 - 12.898: 98.7057% ( 4) 00:09:55.341 12.898 - 12.997: 98.7620% ( 10) 00:09:55.341 12.997 - 13.095: 98.8239% ( 11) 00:09:55.341 13.095 - 13.194: 98.8914% ( 12) 00:09:55.341 13.194 - 13.292: 98.9364% ( 8) 00:09:55.341 13.292 - 13.391: 99.0096% ( 13) 00:09:55.341 13.391 - 13.489: 99.0996% ( 16) 00:09:55.341 13.489 - 13.588: 99.1728% ( 13) 00:09:55.341 13.588 - 13.686: 99.2853% ( 20) 00:09:55.341 13.686 - 13.785: 99.3134% ( 5) 00:09:55.341 13.785 - 13.883: 99.3866% ( 13) 00:09:55.341 13.883 - 13.982: 99.4373% ( 9) 00:09:55.341 13.982 - 14.080: 99.4879% ( 9) 00:09:55.341 14.080 - 14.178: 99.5385% ( 9) 00:09:55.341 14.178 - 14.277: 99.5836% ( 8) 00:09:55.341 14.277 - 14.375: 99.6286% ( 8) 00:09:55.341 14.375 - 14.474: 99.6792% ( 9) 00:09:55.341 14.474 - 14.572: 99.7017% ( 4) 00:09:55.341 14.572 - 14.671: 99.7243% ( 4) 00:09:55.341 14.671 - 14.769: 99.7468% ( 
4) 00:09:55.341 14.769 - 14.868: 99.7693% ( 4) 00:09:55.341 14.868 - 14.966: 99.8030% ( 6) 00:09:55.341 14.966 - 15.065: 99.8087% ( 1) 00:09:55.341 15.065 - 15.163: 99.8143% ( 1) 00:09:55.341 15.163 - 15.262: 99.8199% ( 1) 00:09:55.341 15.262 - 15.360: 99.8255% ( 1) 00:09:55.341 15.360 - 15.458: 99.8312% ( 1) 00:09:55.341 15.852 - 15.951: 99.8424% ( 2) 00:09:55.341 16.443 - 16.542: 99.8481% ( 1) 00:09:55.341 16.640 - 16.738: 99.8537% ( 1) 00:09:55.342 17.034 - 17.132: 99.8593% ( 1) 00:09:55.342 17.231 - 17.329: 99.8649% ( 1) 00:09:55.342 17.625 - 17.723: 99.8762% ( 2) 00:09:55.342 18.609 - 18.708: 99.8818% ( 1) 00:09:55.342 19.003 - 19.102: 99.8875% ( 1) 00:09:55.342 19.200 - 19.298: 99.8931% ( 1) 00:09:55.342 19.397 - 19.495: 99.8987% ( 1) 00:09:55.342 19.692 - 19.791: 99.9100% ( 2) 00:09:55.342 20.874 - 20.972: 99.9212% ( 2) 00:09:55.342 20.972 - 21.071: 99.9268% ( 1) 00:09:55.342 21.268 - 21.366: 99.9325% ( 1) 00:09:55.342 22.843 - 22.942: 99.9437% ( 2) 00:09:55.342 23.828 - 23.926: 99.9494% ( 1) 00:09:55.342 25.994 - 26.191: 99.9550% ( 1) 00:09:55.342 30.129 - 30.326: 99.9606% ( 1) 00:09:55.342 32.098 - 32.295: 99.9662% ( 1) 00:09:55.342 35.052 - 35.249: 99.9719% ( 1) 00:09:55.342 41.551 - 41.748: 99.9775% ( 1) 00:09:55.342 50.018 - 50.215: 99.9831% ( 1) 00:09:55.342 59.865 - 60.258: 99.9887% ( 1) 00:09:55.342 70.892 - 71.286: 99.9944% ( 1) 00:09:55.342 335.557 - 337.132: 100.0000% ( 1) 00:09:55.342 00:09:55.342 00:09:55.342 real 0m1.200s 00:09:55.342 user 0m1.066s 00:09:55.342 sys 0m0.093s 00:09:55.342 16:19:14 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:55.342 16:19:14 -- common/autotest_common.sh@10 -- # set +x 00:09:55.342 ************************************ 00:09:55.342 END TEST nvme_overhead 00:09:55.342 ************************************ 00:09:55.342 16:19:14 -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:55.342 16:19:14 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:09:55.342 16:19:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:55.342 16:19:14 -- common/autotest_common.sh@10 -- # set +x 00:09:55.342 ************************************ 00:09:55.342 START TEST nvme_arbitration 00:09:55.342 ************************************ 00:09:55.342 16:19:14 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:09:58.639 Initializing NVMe Controllers 00:09:58.639 Attached to 0000:00:06.0 00:09:58.639 Attached to 0000:00:07.0 00:09:58.639 Attached to 0000:00:09.0 00:09:58.639 Attached to 0000:00:08.0 00:09:58.639 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:58.639 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:58.639 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:58.639 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:58.639 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:58.639 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:58.639 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:58.639 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:58.639 Initialization complete. Launching workers. 
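The per-core throughput lines that follow come from the arbitration example launched above as arbitration -t 3 -i 0. The -t 3 budget is consistent with the roughly 3.4 s wall time reported at the end of this test, and -i 0 is the same shared-memory id used throughout the run (both readings inferred from the surrounding log). To reproduce by hand:

  # Hedged sketch: same binary and flags the harness uses.
  sudo /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0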
00:09:58.639 Starting thread on core 1 with urgent priority queue 00:09:58.639 Starting thread on core 2 with urgent priority queue 00:09:58.639 Starting thread on core 3 with urgent priority queue 00:09:58.639 Starting thread on core 0 with urgent priority queue 00:09:58.640 QEMU NVMe Ctrl (12340 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:09:58.640 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:09:58.640 QEMU NVMe Ctrl (12341 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:09:58.640 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:09:58.640 QEMU NVMe Ctrl (12343 ) core 2: 832.00 IO/s 120.19 secs/100000 ios 00:09:58.640 QEMU NVMe Ctrl (12342 ) core 3: 960.00 IO/s 104.17 secs/100000 ios 00:09:58.640 ======================================================== 00:09:58.640 00:09:58.640 00:09:58.640 real 0m3.367s 00:09:58.640 user 0m9.449s 00:09:58.640 sys 0m0.110s 00:09:58.640 16:19:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:58.640 16:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:58.640 ************************************ 00:09:58.640 END TEST nvme_arbitration 00:09:58.640 ************************************ 00:09:58.640 16:19:18 -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:58.640 16:19:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:09:58.640 16:19:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:58.640 16:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:58.640 ************************************ 00:09:58.640 START TEST nvme_single_aen 00:09:58.640 ************************************ 00:09:58.640 16:19:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 -L log 00:09:58.640 [2024-11-09 16:19:18.332158] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:09:58.640 [2024-11-09 16:19:18.332221] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:58.898 [2024-11-09 16:19:18.462493] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:09:58.898 [2024-11-09 16:19:18.463657] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:09:58.898 [2024-11-09 16:19:18.464953] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:09:58.898 [2024-11-09 16:19:18.465971] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:09:58.898 Asynchronous Event Request test 00:09:58.898 Attached to 0000:00:06.0 00:09:58.898 Attached to 0000:00:07.0 00:09:58.898 Attached to 0000:00:09.0 00:09:58.898 Attached to 0000:00:08.0 00:09:58.898 Reset controller to setup AER completions for this process 00:09:58.898 Registering asynchronous event callbacks... 
00:09:58.898 Getting orig temperature thresholds of all controllers 00:09:58.898 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:58.898 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:58.898 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:58.898 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:58.898 Setting all controllers temperature threshold low to trigger AER 00:09:58.898 Waiting for all controllers temperature threshold to be set lower 00:09:58.898 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:58.898 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:09:58.898 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:58.898 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:09:58.898 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:58.898 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:09:58.898 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:58.898 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:09:58.898 Waiting for all controllers to trigger AER and reset threshold 00:09:58.898 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:58.898 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:58.898 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:58.898 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:58.898 Cleaning up... 00:09:58.898 00:09:58.898 real 0m0.197s 00:09:58.898 user 0m0.056s 00:09:58.898 sys 0m0.096s 00:09:58.898 16:19:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:58.898 16:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:58.898 ************************************ 00:09:58.898 END TEST nvme_single_aen 00:09:58.898 ************************************ 00:09:58.898 16:19:18 -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:58.898 16:19:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:09:58.898 16:19:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:58.898 16:19:18 -- common/autotest_common.sh@10 -- # set +x 00:09:58.898 ************************************ 00:09:58.898 START TEST nvme_doorbell_aers 00:09:58.898 ************************************ 00:09:58.898 16:19:18 -- common/autotest_common.sh@1114 -- # nvme_doorbell_aers 00:09:58.898 16:19:18 -- nvme/nvme.sh@70 -- # bdfs=() 00:09:58.898 16:19:18 -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:58.898 16:19:18 -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:58.898 16:19:18 -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:58.898 16:19:18 -- common/autotest_common.sh@1508 -- # bdfs=() 00:09:58.899 16:19:18 -- common/autotest_common.sh@1508 -- # local bdfs 00:09:58.899 16:19:18 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:58.899 16:19:18 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:58.899 16:19:18 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:09:58.899 16:19:18 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:09:58.899 16:19:18 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:09:58.899 16:19:18 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:58.899 16:19:18 -- nvme/nvme.sh@73 
-- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:06.0' 00:09:59.157 [2024-11-09 16:19:18.758108] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63976) is not found. Dropping the request. 00:10:09.127 Executing: test_write_invalid_db 00:10:09.127 Waiting for AER completion... 00:10:09.127 Failure: test_write_invalid_db 00:10:09.127 00:10:09.127 Executing: test_invalid_db_write_overflow_sq 00:10:09.127 Waiting for AER completion... 00:10:09.127 Failure: test_invalid_db_write_overflow_sq 00:10:09.127 00:10:09.127 Executing: test_invalid_db_write_overflow_cq 00:10:09.127 Waiting for AER completion... 00:10:09.127 Failure: test_invalid_db_write_overflow_cq 00:10:09.127 00:10:09.127 16:19:28 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:09.127 16:19:28 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:07.0' 00:10:09.127 [2024-11-09 16:19:28.821416] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63976) is not found. Dropping the request. 00:10:19.097 Executing: test_write_invalid_db 00:10:19.097 Waiting for AER completion... 00:10:19.097 Failure: test_write_invalid_db 00:10:19.097 00:10:19.097 Executing: test_invalid_db_write_overflow_sq 00:10:19.097 Waiting for AER completion... 00:10:19.097 Failure: test_invalid_db_write_overflow_sq 00:10:19.097 00:10:19.097 Executing: test_invalid_db_write_overflow_cq 00:10:19.097 Waiting for AER completion... 00:10:19.097 Failure: test_invalid_db_write_overflow_cq 00:10:19.097 00:10:19.097 16:19:38 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:19.097 16:19:38 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:08.0' 00:10:19.097 [2024-11-09 16:19:38.851402] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63976) is not found. Dropping the request. 00:10:29.073 Executing: test_write_invalid_db 00:10:29.073 Waiting for AER completion... 00:10:29.073 Failure: test_write_invalid_db 00:10:29.073 00:10:29.073 Executing: test_invalid_db_write_overflow_sq 00:10:29.073 Waiting for AER completion... 00:10:29.073 Failure: test_invalid_db_write_overflow_sq 00:10:29.073 00:10:29.073 Executing: test_invalid_db_write_overflow_cq 00:10:29.073 Waiting for AER completion... 00:10:29.073 Failure: test_invalid_db_write_overflow_cq 00:10:29.073 00:10:29.073 16:19:48 -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:10:29.073 16:19:48 -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:09.0' 00:10:29.335 [2024-11-09 16:19:48.895449] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63976) is not found. Dropping the request. 00:10:39.302 Executing: test_write_invalid_db 00:10:39.302 Waiting for AER completion... 00:10:39.302 Failure: test_write_invalid_db 00:10:39.302 00:10:39.302 Executing: test_invalid_db_write_overflow_sq 00:10:39.302 Waiting for AER completion... 00:10:39.302 Failure: test_invalid_db_write_overflow_sq 00:10:39.302 00:10:39.302 Executing: test_invalid_db_write_overflow_cq 00:10:39.302 Waiting for AER completion... 
00:10:39.302 Failure: test_invalid_db_write_overflow_cq 00:10:39.302 00:10:39.302 00:10:39.302 real 0m40.188s 00:10:39.302 user 0m34.138s 00:10:39.302 sys 0m5.677s 00:10:39.302 16:19:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:39.302 16:19:58 -- common/autotest_common.sh@10 -- # set +x 00:10:39.302 ************************************ 00:10:39.302 END TEST nvme_doorbell_aers 00:10:39.302 ************************************ 00:10:39.302 16:19:58 -- nvme/nvme.sh@97 -- # uname 00:10:39.302 16:19:58 -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:39.302 16:19:58 -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:39.302 16:19:58 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:10:39.302 16:19:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:39.302 16:19:58 -- common/autotest_common.sh@10 -- # set +x 00:10:39.302 ************************************ 00:10:39.302 START TEST nvme_multi_aen 00:10:39.302 ************************************ 00:10:39.302 16:19:58 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 -L log 00:10:39.302 [2024-11-09 16:19:58.802939] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:39.302 [2024-11-09 16:19:58.803007] [ DPDK EAL parameters: aer -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.302 [2024-11-09 16:19:58.941984] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:39.302 [2024-11-09 16:19:58.942033] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63976) is not found. Dropping the request. 00:10:39.302 [2024-11-09 16:19:58.942069] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63976) is not found. Dropping the request. 00:10:39.302 [2024-11-09 16:19:58.942081] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63976) is not found. Dropping the request. 00:10:39.302 [2024-11-09 16:19:58.943780] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:07.0] resetting controller 00:10:39.302 [2024-11-09 16:19:58.943811] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63976) is not found. Dropping the request. 00:10:39.302 [2024-11-09 16:19:58.943833] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63976) is not found. Dropping the request. 00:10:39.302 [2024-11-09 16:19:58.943844] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63976) is not found. Dropping the request. 00:10:39.302 [2024-11-09 16:19:58.945054] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:09.0] resetting controller 00:10:39.302 [2024-11-09 16:19:58.945076] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63976) is not found. Dropping the request. 00:10:39.302 [2024-11-09 16:19:58.945095] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63976) is not found. Dropping the request. 
00:10:39.303 [2024-11-09 16:19:58.945106] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63976) is not found. Dropping the request. 00:10:39.303 [2024-11-09 16:19:58.946309] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:08.0] resetting controller 00:10:39.303 [2024-11-09 16:19:58.946331] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63976) is not found. Dropping the request. 00:10:39.303 [2024-11-09 16:19:58.946349] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63976) is not found. Dropping the request. 00:10:39.303 [2024-11-09 16:19:58.946360] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63976) is not found. Dropping the request. 00:10:39.303 [2024-11-09 16:19:58.957026] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:10:39.303 Child process pid: 64492 00:10:39.303 [2024-11-09 16:19:58.957243] [ DPDK EAL parameters: aer -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.561 [Child] Asynchronous Event Request test 00:10:39.561 [Child] Attached to 0000:00:06.0 00:10:39.561 [Child] Attached to 0000:00:07.0 00:10:39.561 [Child] Attached to 0000:00:09.0 00:10:39.561 [Child] Attached to 0000:00:08.0 00:10:39.561 [Child] Registering asynchronous event callbacks... 00:10:39.561 [Child] Getting orig temperature thresholds of all controllers 00:10:39.561 [Child] 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:39.561 [Child] 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:39.561 [Child] 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:39.561 [Child] 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:39.561 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:39.561 [Child] 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:39.561 [Child] 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:39.561 [Child] 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:39.561 [Child] 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:39.561 [Child] 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:39.561 [Child] 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:39.561 [Child] 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:39.561 [Child] 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:39.561 [Child] Cleaning up... 00:10:39.561 Asynchronous Event Request test 00:10:39.561 Attached to 0000:00:06.0 00:10:39.561 Attached to 0000:00:07.0 00:10:39.561 Attached to 0000:00:09.0 00:10:39.561 Attached to 0000:00:08.0 00:10:39.561 Reset controller to setup AER completions for this process 00:10:39.561 Registering asynchronous event callbacks... 
00:10:39.561 Getting orig temperature thresholds of all controllers 00:10:39.561 0000:00:06.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:39.561 0000:00:07.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:39.561 0000:00:09.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:39.561 0000:00:08.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:39.561 Setting all controllers temperature threshold low to trigger AER 00:10:39.561 Waiting for all controllers temperature threshold to be set lower 00:10:39.561 0000:00:06.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:39.561 aer_cb - Resetting Temp Threshold for device: 0000:00:06.0 00:10:39.561 0000:00:07.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:39.561 aer_cb - Resetting Temp Threshold for device: 0000:00:07.0 00:10:39.561 0000:00:09.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:39.561 aer_cb - Resetting Temp Threshold for device: 0000:00:09.0 00:10:39.562 0000:00:08.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:39.562 aer_cb - Resetting Temp Threshold for device: 0000:00:08.0 00:10:39.562 Waiting for all controllers to trigger AER and reset threshold 00:10:39.562 0000:00:06.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:39.562 0000:00:07.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:39.562 0000:00:09.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:39.562 0000:00:08.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:39.562 Cleaning up... 00:10:39.562 00:10:39.562 real 0m0.422s 00:10:39.562 user 0m0.126s 00:10:39.562 sys 0m0.185s 00:10:39.562 16:19:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:39.562 16:19:59 -- common/autotest_common.sh@10 -- # set +x 00:10:39.562 ************************************ 00:10:39.562 END TEST nvme_multi_aen 00:10:39.562 ************************************ 00:10:39.562 16:19:59 -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:39.562 16:19:59 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:39.562 16:19:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:39.562 16:19:59 -- common/autotest_common.sh@10 -- # set +x 00:10:39.562 ************************************ 00:10:39.562 START TEST nvme_startup 00:10:39.562 ************************************ 00:10:39.562 16:19:59 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:39.822 Initializing NVMe Controllers 00:10:39.822 Attached to 0000:00:06.0 00:10:39.822 Attached to 0000:00:07.0 00:10:39.822 Attached to 0000:00:09.0 00:10:39.822 Attached to 0000:00:08.0 00:10:39.822 Initialization complete. 00:10:39.822 Time used:146628.859 (us). 
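The "Time used" figure above is the quantity nvme_startup measures: wall time for all four controllers to attach, about 146.6 ms here. The test was started as startup -t 1000000; the -t value is presumably a microsecond budget the measured time must stay under, though that semantics is an assumption from context, not confirmed by the log:

  # Hedged: pass if the controllers come up within 1,000,000 us (assumed meaning of -t).
  sudo /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000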
00:10:39.822 00:10:39.822 real 0m0.214s 00:10:39.822 user 0m0.047s 00:10:39.822 sys 0m0.118s 00:10:39.822 16:19:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:39.822 16:19:59 -- common/autotest_common.sh@10 -- # set +x 00:10:39.822 ************************************ 00:10:39.822 END TEST nvme_startup 00:10:39.822 ************************************ 00:10:39.822 16:19:59 -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:39.822 16:19:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:39.822 16:19:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:39.822 16:19:59 -- common/autotest_common.sh@10 -- # set +x 00:10:39.822 ************************************ 00:10:39.822 START TEST nvme_multi_secondary 00:10:39.823 ************************************ 00:10:39.823 16:19:59 -- common/autotest_common.sh@1114 -- # nvme_multi_secondary 00:10:39.823 16:19:59 -- nvme/nvme.sh@52 -- # pid0=64548 00:10:39.823 16:19:59 -- nvme/nvme.sh@54 -- # pid1=64549 00:10:39.823 16:19:59 -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:39.823 16:19:59 -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:39.823 16:19:59 -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:43.119 Initializing NVMe Controllers 00:10:43.119 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:43.119 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:43.119 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:43.119 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:43.119 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:43.119 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:43.119 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:43.119 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:43.119 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:43.119 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:43.119 Initialization complete. Launching workers. 
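The latency tables below are produced by spdk_nvme_perf. Decoding the invocations above (inferred, not authoritative): -q 16 is the queue depth, -w read the workload, -o 4096 the IO size in bytes, -t the runtime in seconds, -c the CPU core mask (0x1, 0x2, 0x4 for cores 0 through 2), and -i 0 the shared-memory id that lets several instances drive the same controllers. A single-instance sketch:

  # Hedged: one perf instance pinned to core 1, flags copied from the log.
  sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
      -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2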
00:10:43.119 ======================================================== 00:10:43.119 Latency(us) 00:10:43.119 Device Information : IOPS MiB/s Average min max 00:10:43.119 PCIE (0000:00:06.0) NSID 1 from core 1: 4110.74 16.06 3890.49 879.82 13039.80 00:10:43.119 PCIE (0000:00:07.0) NSID 1 from core 1: 4110.74 16.06 3891.92 757.94 11475.34 00:10:43.119 PCIE (0000:00:09.0) NSID 1 from core 1: 4110.74 16.06 3891.86 942.16 11506.90 00:10:43.119 PCIE (0000:00:08.0) NSID 1 from core 1: 4110.74 16.06 3891.84 917.01 11544.57 00:10:43.119 PCIE (0000:00:08.0) NSID 2 from core 1: 4110.74 16.06 3891.82 927.42 12147.48 00:10:43.119 PCIE (0000:00:08.0) NSID 3 from core 1: 4116.08 16.08 3886.78 911.60 12948.81 00:10:43.119 ======================================================== 00:10:43.119 Total : 24669.80 96.37 3890.78 757.94 13039.80 00:10:43.119 00:10:43.380 Initializing NVMe Controllers 00:10:43.380 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:43.380 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:43.380 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:43.380 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:43.380 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:43.380 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:43.380 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:43.380 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:43.380 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:43.380 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:43.380 Initialization complete. Launching workers. 00:10:43.380 ======================================================== 00:10:43.380 Latency(us) 00:10:43.380 Device Information : IOPS MiB/s Average min max 00:10:43.380 PCIE (0000:00:06.0) NSID 1 from core 2: 1603.43 6.26 9975.77 1068.17 27360.54 00:10:43.380 PCIE (0000:00:07.0) NSID 1 from core 2: 1603.43 6.26 9979.16 990.63 26841.64 00:10:43.380 PCIE (0000:00:09.0) NSID 1 from core 2: 1603.43 6.26 9976.53 1234.29 27924.90 00:10:43.380 PCIE (0000:00:08.0) NSID 1 from core 2: 1603.43 6.26 9963.82 1336.08 25795.74 00:10:43.380 PCIE (0000:00:08.0) NSID 2 from core 2: 1603.43 6.26 9962.99 1370.48 25571.75 00:10:43.380 PCIE (0000:00:08.0) NSID 3 from core 2: 1603.43 6.26 9963.11 1121.81 22931.16 00:10:43.380 ======================================================== 00:10:43.380 Total : 9620.55 37.58 9970.23 990.63 27924.90 00:10:43.380 00:10:43.380 16:20:03 -- nvme/nvme.sh@56 -- # wait 64548 00:10:45.292 Initializing NVMe Controllers 00:10:45.292 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:45.292 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:45.292 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:45.292 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:45.292 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:45.292 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:45.292 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:45.292 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:45.292 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:45.292 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:45.292 Initialization complete. Launching workers. 
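The "wait 64548" step above shows the orchestration pattern: nvme.sh backgrounds the secondary perf instances, records their pids (pid0/pid1), runs another instance in the foreground, then waits on the background pids. A hedged sketch of that fan-out, with the variable name PERF invented for illustration:

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  sudo $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid0=$!   # secondary, core 1
  sudo $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid1=$!   # secondary, core 2
  sudo $PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1             # longer foreground run
  wait "$pid0" "$pid1"                                          # collect secondaries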
00:10:45.292 ======================================================== 00:10:45.292 Latency(us) 00:10:45.292 Device Information : IOPS MiB/s Average min max 00:10:45.292 PCIE (0000:00:06.0) NSID 1 from core 0: 6025.44 23.54 2654.08 716.86 13582.18 00:10:45.292 PCIE (0000:00:07.0) NSID 1 from core 0: 6025.44 23.54 2655.03 733.44 12904.24 00:10:45.292 PCIE (0000:00:09.0) NSID 1 from core 0: 6025.44 23.54 2655.00 743.74 11568.63 00:10:45.292 PCIE (0000:00:08.0) NSID 1 from core 0: 6025.44 23.54 2654.97 735.89 13202.97 00:10:45.292 PCIE (0000:00:08.0) NSID 2 from core 0: 6025.44 23.54 2654.95 743.56 12628.59 00:10:45.292 PCIE (0000:00:08.0) NSID 3 from core 0: 6028.64 23.55 2653.51 734.68 12900.16 00:10:45.292 ======================================================== 00:10:45.292 Total : 36155.82 141.23 2654.59 716.86 13582.18 00:10:45.292 00:10:45.292 16:20:04 -- nvme/nvme.sh@57 -- # wait 64549 00:10:45.292 16:20:04 -- nvme/nvme.sh@61 -- # pid0=64618 00:10:45.292 16:20:04 -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:45.292 16:20:04 -- nvme/nvme.sh@63 -- # pid1=64619 00:10:45.292 16:20:04 -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:45.292 16:20:04 -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:48.593 Initializing NVMe Controllers 00:10:48.593 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:48.593 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:48.593 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:48.593 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:48.593 Associating PCIE (0000:00:06.0) NSID 1 with lcore 1 00:10:48.593 Associating PCIE (0000:00:07.0) NSID 1 with lcore 1 00:10:48.593 Associating PCIE (0000:00:09.0) NSID 1 with lcore 1 00:10:48.593 Associating PCIE (0000:00:08.0) NSID 1 with lcore 1 00:10:48.593 Associating PCIE (0000:00:08.0) NSID 2 with lcore 1 00:10:48.593 Associating PCIE (0000:00:08.0) NSID 3 with lcore 1 00:10:48.593 Initialization complete. Launching workers. 
00:10:48.593 ======================================================== 00:10:48.593 Latency(us) 00:10:48.593 Device Information : IOPS MiB/s Average min max 00:10:48.593 PCIE (0000:00:06.0) NSID 1 from core 1: 2932.23 11.45 5454.73 861.47 13197.29 00:10:48.593 PCIE (0000:00:07.0) NSID 1 from core 1: 2932.23 11.45 5457.17 875.94 15751.08 00:10:48.593 PCIE (0000:00:09.0) NSID 1 from core 1: 2932.23 11.45 5457.83 873.91 14863.39 00:10:48.593 PCIE (0000:00:08.0) NSID 1 from core 1: 2932.23 11.45 5457.81 867.51 14714.19 00:10:48.593 PCIE (0000:00:08.0) NSID 2 from core 1: 2932.23 11.45 5459.72 860.52 14144.98 00:10:48.593 PCIE (0000:00:08.0) NSID 3 from core 1: 2932.23 11.45 5460.25 877.80 14568.96 00:10:48.593 ======================================================== 00:10:48.593 Total : 17593.37 68.72 5457.92 860.52 15751.08 00:10:48.593 00:10:48.593 Initializing NVMe Controllers 00:10:48.594 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:48.594 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:48.594 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:48.594 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:48.594 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:10:48.594 Associating PCIE (0000:00:07.0) NSID 1 with lcore 0 00:10:48.594 Associating PCIE (0000:00:09.0) NSID 1 with lcore 0 00:10:48.594 Associating PCIE (0000:00:08.0) NSID 1 with lcore 0 00:10:48.594 Associating PCIE (0000:00:08.0) NSID 2 with lcore 0 00:10:48.594 Associating PCIE (0000:00:08.0) NSID 3 with lcore 0 00:10:48.594 Initialization complete. Launching workers. 00:10:48.594 ======================================================== 00:10:48.594 Latency(us) 00:10:48.594 Device Information : IOPS MiB/s Average min max 00:10:48.594 PCIE (0000:00:06.0) NSID 1 from core 0: 3065.28 11.97 5217.94 1104.08 13092.22 00:10:48.594 PCIE (0000:00:07.0) NSID 1 from core 0: 3065.28 11.97 5221.32 1245.10 14350.78 00:10:48.594 PCIE (0000:00:09.0) NSID 1 from core 0: 3065.28 11.97 5221.46 1166.19 14313.35 00:10:48.594 PCIE (0000:00:08.0) NSID 1 from core 0: 3065.28 11.97 5221.40 1127.33 13878.49 00:10:48.594 PCIE (0000:00:08.0) NSID 2 from core 0: 3065.28 11.97 5221.28 1258.80 12448.50 00:10:48.594 PCIE (0000:00:08.0) NSID 3 from core 0: 3070.61 11.99 5212.09 1258.41 13895.11 00:10:48.594 ======================================================== 00:10:48.594 Total : 18397.01 71.86 5219.25 1104.08 14350.78 00:10:48.594 00:10:50.509 Initializing NVMe Controllers 00:10:50.509 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:10:50.509 Attached to NVMe Controller at 0000:00:07.0 [1b36:0010] 00:10:50.509 Attached to NVMe Controller at 0000:00:09.0 [1b36:0010] 00:10:50.509 Attached to NVMe Controller at 0000:00:08.0 [1b36:0010] 00:10:50.509 Associating PCIE (0000:00:06.0) NSID 1 with lcore 2 00:10:50.509 Associating PCIE (0000:00:07.0) NSID 1 with lcore 2 00:10:50.509 Associating PCIE (0000:00:09.0) NSID 1 with lcore 2 00:10:50.509 Associating PCIE (0000:00:08.0) NSID 1 with lcore 2 00:10:50.509 Associating PCIE (0000:00:08.0) NSID 2 with lcore 2 00:10:50.509 Associating PCIE (0000:00:08.0) NSID 3 with lcore 2 00:10:50.509 Initialization complete. Launching workers. 
00:10:50.509 ======================================================== 00:10:50.509 Latency(us) 00:10:50.509 Device Information : IOPS MiB/s Average min max 00:10:50.509 PCIE (0000:00:06.0) NSID 1 from core 2: 1705.04 6.66 9381.58 726.88 36528.50 00:10:50.509 PCIE (0000:00:07.0) NSID 1 from core 2: 1705.04 6.66 9382.95 754.95 30527.77 00:10:50.509 PCIE (0000:00:09.0) NSID 1 from core 2: 1705.04 6.66 9383.30 740.26 34819.63 00:10:50.509 PCIE (0000:00:08.0) NSID 1 from core 2: 1705.04 6.66 9383.20 748.94 34338.80 00:10:50.509 PCIE (0000:00:08.0) NSID 2 from core 2: 1705.04 6.66 9383.09 624.09 34554.03 00:10:50.509 PCIE (0000:00:08.0) NSID 3 from core 2: 1705.04 6.66 9383.00 624.44 34798.49 00:10:50.509 ======================================================== 00:10:50.509 Total : 10230.21 39.96 9382.85 624.09 36528.50 00:10:50.509 00:10:50.769 16:20:10 -- nvme/nvme.sh@65 -- # wait 64618 00:10:50.769 16:20:10 -- nvme/nvme.sh@66 -- # wait 64619 00:10:50.769 00:10:50.769 real 0m10.808s 00:10:50.769 user 0m18.527s 00:10:50.769 sys 0m0.747s 00:10:50.769 16:20:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:10:50.769 ************************************ 00:10:50.769 END TEST nvme_multi_secondary 00:10:50.769 16:20:10 -- common/autotest_common.sh@10 -- # set +x 00:10:50.769 ************************************ 00:10:50.769 16:20:10 -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:50.769 16:20:10 -- nvme/nvme.sh@102 -- # kill_stub 00:10:50.769 16:20:10 -- common/autotest_common.sh@1075 -- # [[ -e /proc/63564 ]] 00:10:50.769 16:20:10 -- common/autotest_common.sh@1076 -- # kill 63564 00:10:50.769 16:20:10 -- common/autotest_common.sh@1077 -- # wait 63564 00:10:51.710 [2024-11-09 16:20:11.251469] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:10:51.710 [2024-11-09 16:20:11.251711] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:10:51.710 [2024-11-09 16:20:11.251731] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:10:51.710 [2024-11-09 16:20:11.251742] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:10:52.655 [2024-11-09 16:20:12.262481] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:10:52.655 [2024-11-09 16:20:12.262719] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:10:52.655 [2024-11-09 16:20:12.262737] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:10:52.655 [2024-11-09 16:20:12.262750] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:10:53.600 [2024-11-09 16:20:13.271286] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:10:53.600 [2024-11-09 16:20:13.271362] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. 
Dropping the request. 00:10:53.600 [2024-11-09 16:20:13.271375] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:10:53.600 [2024-11-09 16:20:13.271387] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:10:55.521 [2024-11-09 16:20:14.779955] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:10:55.521 [2024-11-09 16:20:14.780048] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:10:55.521 [2024-11-09 16:20:14.780060] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:10:55.521 [2024-11-09 16:20:14.780076] nvme_pcie_common.c: 292:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64491) is not found. Dropping the request. 00:10:55.521 16:20:14 -- common/autotest_common.sh@1079 -- # rm -f /var/run/spdk_stub0 00:10:55.521 16:20:14 -- common/autotest_common.sh@1083 -- # echo 2 00:10:55.521 16:20:14 -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:55.521 16:20:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:55.521 16:20:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:55.521 16:20:14 -- common/autotest_common.sh@10 -- # set +x 00:10:55.521 ************************************ 00:10:55.521 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:55.521 ************************************ 00:10:55.521 16:20:15 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:55.521 * Looking for test storage... 00:10:55.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:55.521 16:20:15 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:10:55.521 16:20:15 -- common/autotest_common.sh@1690 -- # lcov --version 00:10:55.521 16:20:15 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:10:55.521 16:20:15 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:10:55.521 16:20:15 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:10:55.521 16:20:15 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:10:55.521 16:20:15 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:10:55.521 16:20:15 -- scripts/common.sh@335 -- # IFS=.-: 00:10:55.521 16:20:15 -- scripts/common.sh@335 -- # read -ra ver1 00:10:55.521 16:20:15 -- scripts/common.sh@336 -- # IFS=.-: 00:10:55.521 16:20:15 -- scripts/common.sh@336 -- # read -ra ver2 00:10:55.521 16:20:15 -- scripts/common.sh@337 -- # local 'op=<' 00:10:55.521 16:20:15 -- scripts/common.sh@339 -- # ver1_l=2 00:10:55.521 16:20:15 -- scripts/common.sh@340 -- # ver2_l=1 00:10:55.521 16:20:15 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:10:55.521 16:20:15 -- scripts/common.sh@343 -- # case "$op" in 00:10:55.521 16:20:15 -- scripts/common.sh@344 -- # : 1 00:10:55.521 16:20:15 -- scripts/common.sh@363 -- # (( v = 0 )) 00:10:55.521 16:20:15 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:55.521 16:20:15 -- scripts/common.sh@364 -- # decimal 1 00:10:55.521 16:20:15 -- scripts/common.sh@352 -- # local d=1 00:10:55.521 16:20:15 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:55.521 16:20:15 -- scripts/common.sh@354 -- # echo 1 00:10:55.521 16:20:15 -- scripts/common.sh@364 -- # ver1[v]=1 00:10:55.521 16:20:15 -- scripts/common.sh@365 -- # decimal 2 00:10:55.521 16:20:15 -- scripts/common.sh@352 -- # local d=2 00:10:55.521 16:20:15 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:55.521 16:20:15 -- scripts/common.sh@354 -- # echo 2 00:10:55.521 16:20:15 -- scripts/common.sh@365 -- # ver2[v]=2 00:10:55.521 16:20:15 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:10:55.521 16:20:15 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:10:55.521 16:20:15 -- scripts/common.sh@367 -- # return 0 00:10:55.521 16:20:15 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:55.521 16:20:15 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:10:55.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.521 --rc genhtml_branch_coverage=1 00:10:55.521 --rc genhtml_function_coverage=1 00:10:55.521 --rc genhtml_legend=1 00:10:55.521 --rc geninfo_all_blocks=1 00:10:55.521 --rc geninfo_unexecuted_blocks=1 00:10:55.521 00:10:55.521 ' 00:10:55.521 16:20:15 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:10:55.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.521 --rc genhtml_branch_coverage=1 00:10:55.521 --rc genhtml_function_coverage=1 00:10:55.522 --rc genhtml_legend=1 00:10:55.522 --rc geninfo_all_blocks=1 00:10:55.522 --rc geninfo_unexecuted_blocks=1 00:10:55.522 00:10:55.522 ' 00:10:55.522 16:20:15 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:10:55.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.522 --rc genhtml_branch_coverage=1 00:10:55.522 --rc genhtml_function_coverage=1 00:10:55.522 --rc genhtml_legend=1 00:10:55.522 --rc geninfo_all_blocks=1 00:10:55.522 --rc geninfo_unexecuted_blocks=1 00:10:55.522 00:10:55.522 ' 00:10:55.522 16:20:15 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:10:55.522 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:55.522 --rc genhtml_branch_coverage=1 00:10:55.522 --rc genhtml_function_coverage=1 00:10:55.522 --rc genhtml_legend=1 00:10:55.522 --rc geninfo_all_blocks=1 00:10:55.522 --rc geninfo_unexecuted_blocks=1 00:10:55.522 00:10:55.522 ' 00:10:55.522 16:20:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:55.522 16:20:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:55.522 16:20:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:55.522 16:20:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:55.522 16:20:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:55.522 16:20:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:55.522 16:20:15 -- common/autotest_common.sh@1519 -- # bdfs=() 00:10:55.522 16:20:15 -- common/autotest_common.sh@1519 -- # local bdfs 00:10:55.522 16:20:15 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:10:55.522 16:20:15 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:10:55.522 16:20:15 -- common/autotest_common.sh@1508 -- # bdfs=() 00:10:55.522 16:20:15 -- common/autotest_common.sh@1508 -- # local bdfs 00:10:55.522 16:20:15 -- common/autotest_common.sh@1509 
-- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:55.522 16:20:15 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:55.522 16:20:15 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:10:55.522 16:20:15 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:10:55.522 16:20:15 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:10:55.522 16:20:15 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:10:55.522 16:20:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:06.0 00:10:55.522 16:20:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:06.0 ']' 00:10:55.522 16:20:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64831 00:10:55.522 16:20:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:55.522 16:20:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:55.522 16:20:15 -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64831 00:10:55.522 16:20:15 -- common/autotest_common.sh@829 -- # '[' -z 64831 ']' 00:10:55.522 16:20:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.522 16:20:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:55.522 16:20:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.522 16:20:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:55.522 16:20:15 -- common/autotest_common.sh@10 -- # set +x 00:10:55.784 [2024-11-09 16:20:15.300628] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
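[editor's note] The trace above shows how the harness finds its first NVMe device: scripts/gen_nvme.sh emits a JSON bdev configuration and jq pulls each controller's PCIe traddr out of it. A minimal sketch of that idiom follows; the function names mirror the helpers traced above, the rootdir path is the one this run uses, and the head -n1 fallback is my simplification (the real helper indexes the array directly):

  #!/usr/bin/env bash
  # Sketch of the BDF-discovery idiom traced above: gen_nvme.sh prints a
  # bdev_nvme_attach_controller JSON config; jq extracts every PCIe traddr.
  rootdir=/home/vagrant/spdk_repo/spdk    # checkout location, as in this log

  get_nvme_bdfs() {
      local bdfs
      # One PCI address per line, e.g. 0000:00:06.0 ... 0000:00:09.0 here.
      bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
      (( ${#bdfs[@]} > 0 )) || return 1   # bail out if no controllers found
      printf '%s\n' "${bdfs[@]}"
  }

  get_first_nvme_bdf() {
      get_nvme_bdfs | head -n1
  }

  bdf=$(get_first_nvme_bdf)               # -> 0000:00:06.0 on this VM

The test then refuses to start (the '[' -z ... ']' guard above) unless that address is non-empty, so a machine with no NVMe devices fails fast instead of inside spdk_tgt.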
00:10:55.784 [2024-11-09 16:20:15.300776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64831 ] 00:10:55.784 [2024-11-09 16:20:15.462832] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:56.046 [2024-11-09 16:20:15.687136] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:56.046 [2024-11-09 16:20:15.687538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.046 [2024-11-09 16:20:15.687810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:56.046 [2024-11-09 16:20:15.689138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:56.046 [2024-11-09 16:20:15.689277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.436 16:20:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:57.436 16:20:16 -- common/autotest_common.sh@862 -- # return 0 00:10:57.436 16:20:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:06.0 00:10:57.436 16:20:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.436 16:20:16 -- common/autotest_common.sh@10 -- # set +x 00:10:57.436 nvme0n1 00:10:57.436 16:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.436 16:20:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:57.436 16:20:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_82w89.txt 00:10:57.436 16:20:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:57.436 16:20:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.436 16:20:16 -- common/autotest_common.sh@10 -- # set +x 00:10:57.436 true 00:10:57.436 16:20:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.436 16:20:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:57.436 16:20:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1731169216 00:10:57.436 16:20:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64856 00:10:57.436 16:20:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:57.436 16:20:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:57.436 16:20:16 -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:59.345 16:20:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:59.346 16:20:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.346 16:20:18 -- common/autotest_common.sh@10 -- # set +x 00:10:59.346 [2024-11-09 16:20:18.930942] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:06.0] resetting controller 00:10:59.346 [2024-11-09 16:20:18.931154] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:59.346 [2024-11-09 16:20:18.931175] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:59.346 [2024-11-09 16:20:18.931186] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:59.346 [2024-11-09 16:20:18.932910] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:10:59.346 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64856 00:10:59.346 16:20:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.346 16:20:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64856 00:10:59.346 16:20:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64856 00:10:59.346 16:20:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:59.346 16:20:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:59.346 16:20:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:59.346 16:20:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.346 16:20:18 -- common/autotest_common.sh@10 -- # set +x 00:10:59.346 16:20:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.346 16:20:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:59.346 16:20:18 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_82w89.txt 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_82w89.txt 00:10:59.346 16:20:19 -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64831 00:10:59.346 16:20:19 -- common/autotest_common.sh@936 -- # '[' -z 64831 ']' 00:10:59.346 16:20:19 -- common/autotest_common.sh@940 -- # kill -0 64831 00:10:59.346 16:20:19 -- common/autotest_common.sh@941 -- # uname 00:10:59.346 16:20:19 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:59.346 16:20:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64831 00:10:59.346 killing process with pid 64831 00:10:59.346 16:20:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:59.346 16:20:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:59.346 16:20:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64831' 00:10:59.346 16:20:19 -- common/autotest_common.sh@955 -- # kill 64831 00:10:59.346 16:20:19 -- common/autotest_common.sh@960 -- # wait 64831 00:11:00.721 16:20:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:11:00.721 16:20:20 -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:11:00.721 00:11:00.721 real 0m5.317s 00:11:00.721 user 0m18.434s 00:11:00.721 sys 0m0.720s 00:11:00.721 16:20:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:00.721 ************************************ 00:11:00.721 END TEST bdev_nvme_reset_stuck_adm_cmd 00:11:00.721 ************************************ 00:11:00.721 16:20:20 -- common/autotest_common.sh@10 -- # set +x 00:11:00.721 16:20:20 -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:11:00.721 16:20:20 -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:11:00.721 16:20:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:00.721 16:20:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:00.721 16:20:20 -- common/autotest_common.sh@10 -- # set +x 00:11:00.721 ************************************ 00:11:00.721 START TEST nvme_fio 00:11:00.721 ************************************ 00:11:00.721 16:20:20 -- common/autotest_common.sh@1114 -- # nvme_fio_test 00:11:00.721 16:20:20 -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:11:00.721 16:20:20 -- nvme/nvme.sh@32 -- # ran_fio=false 00:11:00.721 16:20:20 -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:11:00.721 16:20:20 -- common/autotest_common.sh@1508 -- # bdfs=() 00:11:00.721 16:20:20 -- common/autotest_common.sh@1508 -- # local bdfs 00:11:00.721 16:20:20 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:00.721 16:20:20 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:00.721 16:20:20 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:11:00.721 16:20:20 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:11:00.721 16:20:20 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:00.721 16:20:20 -- nvme/nvme.sh@33 -- # bdfs=('0000:00:06.0' '0000:00:07.0' '0000:00:08.0' '0000:00:09.0') 00:11:00.721 16:20:20 -- nvme/nvme.sh@33 -- # local bdfs bdf 00:11:00.721 16:20:20 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:00.721 16:20:20 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:00.721 16:20:20 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:00.981 16:20:20 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:00.981 16:20:20 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' 00:11:01.243 16:20:20 -- nvme/nvme.sh@41 -- # bs=4096 00:11:01.243 16:20:20 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio 
'--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:01.243 16:20:20 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:01.243 16:20:20 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:01.243 16:20:20 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:01.243 16:20:20 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:01.243 16:20:20 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:01.243 16:20:20 -- common/autotest_common.sh@1330 -- # shift 00:11:01.243 16:20:20 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:01.243 16:20:20 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:01.243 16:20:20 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:01.243 16:20:20 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:01.243 16:20:20 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:01.243 16:20:20 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:01.243 16:20:20 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:01.243 16:20:20 -- common/autotest_common.sh@1336 -- # break 00:11:01.243 16:20:20 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:01.243 16:20:20 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.06.0' --bs=4096 00:11:01.504 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:01.504 fio-3.35 00:11:01.504 Starting 1 thread 00:11:06.793 00:11:06.793 test: (groupid=0, jobs=1): err= 0: pid=64994: Sat Nov 9 16:20:26 2024 00:11:06.793 read: IOPS=19.1k, BW=74.5MiB/s (78.1MB/s)(149MiB/2001msec) 00:11:06.793 slat (usec): min=4, max=124, avg= 6.18, stdev= 2.58 00:11:06.793 clat (usec): min=943, max=11464, avg=3325.74, stdev=1026.90 00:11:06.793 lat (usec): min=949, max=11470, avg=3331.92, stdev=1028.08 00:11:06.793 clat percentiles (usec): 00:11:06.793 | 1.00th=[ 2245], 5.00th=[ 2442], 10.00th=[ 2540], 20.00th=[ 2671], 00:11:06.793 | 30.00th=[ 2802], 40.00th=[ 2868], 50.00th=[ 2999], 60.00th=[ 3130], 00:11:06.793 | 70.00th=[ 3326], 80.00th=[ 3720], 90.00th=[ 4621], 95.00th=[ 5538], 00:11:06.793 | 99.00th=[ 7504], 99.50th=[ 7963], 99.90th=[ 8979], 99.95th=[ 9110], 00:11:06.793 | 99.99th=[11207] 00:11:06.794 bw ( KiB/s): min=72112, max=81048, per=99.27%, avg=75752.00, stdev=4692.52, samples=3 00:11:06.794 iops : min=18028, max=20262, avg=18938.00, stdev=1173.13, samples=3 00:11:06.794 write: IOPS=19.1k, BW=74.5MiB/s (78.1MB/s)(149MiB/2001msec); 0 zone resets 00:11:06.794 slat (usec): min=5, max=179, avg= 6.59, stdev= 2.71 00:11:06.794 clat (usec): min=953, max=11513, avg=3357.98, stdev=1020.72 00:11:06.794 lat (usec): min=959, max=11519, avg=3364.57, stdev=1021.95 00:11:06.794 clat percentiles (usec): 00:11:06.794 | 1.00th=[ 2278], 5.00th=[ 2474], 10.00th=[ 2573], 20.00th=[ 2704], 00:11:06.794 | 30.00th=[ 2802], 40.00th=[ 2900], 50.00th=[ 3032], 60.00th=[ 3163], 00:11:06.794 | 70.00th=[ 3359], 80.00th=[ 3752], 90.00th=[ 4686], 95.00th=[ 5473], 00:11:06.794 | 99.00th=[ 7504], 99.50th=[ 7963], 
99.90th=[ 8848], 99.95th=[ 8979], 00:11:06.794 | 99.99th=[11338] 00:11:06.794 bw ( KiB/s): min=72032, max=81272, per=99.37%, avg=75770.67, stdev=4865.66, samples=3 00:11:06.794 iops : min=18008, max=20318, avg=18942.67, stdev=1216.41, samples=3 00:11:06.794 lat (usec) : 1000=0.01% 00:11:06.794 lat (msec) : 2=0.34%, 4=83.51%, 10=16.10%, 20=0.04% 00:11:06.794 cpu : usr=98.60%, sys=0.35%, ctx=6, majf=0, minf=608 00:11:06.794 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:06.794 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.794 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:06.794 issued rwts: total=38174,38145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.794 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:06.794 00:11:06.794 Run status group 0 (all jobs): 00:11:06.794 READ: bw=74.5MiB/s (78.1MB/s), 74.5MiB/s-74.5MiB/s (78.1MB/s-78.1MB/s), io=149MiB (156MB), run=2001-2001msec 00:11:06.794 WRITE: bw=74.5MiB/s (78.1MB/s), 74.5MiB/s-74.5MiB/s (78.1MB/s-78.1MB/s), io=149MiB (156MB), run=2001-2001msec 00:11:07.056 ----------------------------------------------------- 00:11:07.056 Suppressions used: 00:11:07.056 count bytes template 00:11:07.056 1 32 /usr/src/fio/parse.c 00:11:07.056 1 8 libtcmalloc_minimal.so 00:11:07.056 ----------------------------------------------------- 00:11:07.056 00:11:07.056 16:20:26 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:07.056 16:20:26 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:07.056 16:20:26 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:11:07.056 16:20:26 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:07.317 16:20:26 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:07.317 16:20:26 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:07.0' 00:11:07.579 16:20:27 -- nvme/nvme.sh@41 -- # bs=4096 00:11:07.579 16:20:27 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:07.579 16:20:27 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:07.579 16:20:27 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:07.579 16:20:27 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:07.579 16:20:27 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:07.579 16:20:27 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:07.579 16:20:27 -- common/autotest_common.sh@1330 -- # shift 00:11:07.579 16:20:27 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:07.579 16:20:27 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:07.579 16:20:27 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:07.579 16:20:27 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:07.579 16:20:27 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:07.579 16:20:27 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:07.579 16:20:27 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:07.579 16:20:27 -- 
common/autotest_common.sh@1336 -- # break 00:11:07.579 16:20:27 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:07.579 16:20:27 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.07.0' --bs=4096 00:11:07.579 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:07.579 fio-3.35 00:11:07.579 Starting 1 thread 00:11:11.781 00:11:11.781 test: (groupid=0, jobs=1): err= 0: pid=65087: Sat Nov 9 16:20:31 2024 00:11:11.781 read: IOPS=13.0k, BW=50.7MiB/s (53.2MB/s)(101MiB/2001msec) 00:11:11.781 slat (nsec): min=4781, max=85050, avg=7111.95, stdev=4472.82 00:11:11.781 clat (usec): min=959, max=13058, avg=4894.32, stdev=1545.83 00:11:11.781 lat (usec): min=966, max=13134, avg=4901.44, stdev=1547.21 00:11:11.781 clat percentiles (usec): 00:11:11.781 | 1.00th=[ 2638], 5.00th=[ 2966], 10.00th=[ 3163], 20.00th=[ 3458], 00:11:11.781 | 30.00th=[ 3687], 40.00th=[ 4015], 50.00th=[ 4621], 60.00th=[ 5276], 00:11:11.781 | 70.00th=[ 5800], 80.00th=[ 6325], 90.00th=[ 7046], 95.00th=[ 7570], 00:11:11.781 | 99.00th=[ 8848], 99.50th=[ 9241], 99.90th=[10552], 99.95th=[11076], 00:11:11.781 | 99.99th=[13042] 00:11:11.781 bw ( KiB/s): min=44184, max=58104, per=100.00%, avg=52770.67, stdev=7508.64, samples=3 00:11:11.781 iops : min=11046, max=14526, avg=13192.67, stdev=1877.16, samples=3 00:11:11.781 write: IOPS=13.0k, BW=50.6MiB/s (53.1MB/s)(101MiB/2001msec); 0 zone resets 00:11:11.781 slat (nsec): min=4966, max=88095, avg=7354.33, stdev=4502.17 00:11:11.781 clat (usec): min=984, max=12987, avg=4933.64, stdev=1547.63 00:11:11.781 lat (usec): min=990, max=13001, avg=4941.00, stdev=1549.04 00:11:11.781 clat percentiles (usec): 00:11:11.781 | 1.00th=[ 2671], 5.00th=[ 2999], 10.00th=[ 3195], 20.00th=[ 3458], 00:11:11.781 | 30.00th=[ 3720], 40.00th=[ 4080], 50.00th=[ 4686], 60.00th=[ 5276], 00:11:11.781 | 70.00th=[ 5866], 80.00th=[ 6390], 90.00th=[ 7111], 95.00th=[ 7635], 00:11:11.781 | 99.00th=[ 8848], 99.50th=[ 9241], 99.90th=[10290], 99.95th=[10945], 00:11:11.781 | 99.99th=[12911] 00:11:11.781 bw ( KiB/s): min=44480, max=58360, per=100.00%, avg=52840.00, stdev=7362.93, samples=3 00:11:11.781 iops : min=11120, max=14590, avg=13210.00, stdev=1840.73, samples=3 00:11:11.782 lat (usec) : 1000=0.01% 00:11:11.782 lat (msec) : 2=0.10%, 4=38.91%, 10=60.80%, 20=0.18% 00:11:11.782 cpu : usr=98.20%, sys=0.15%, ctx=4, majf=0, minf=608 00:11:11.782 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:11.782 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:11.782 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:11.782 issued rwts: total=25972,25938,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:11.782 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:11.782 00:11:11.782 Run status group 0 (all jobs): 00:11:11.782 READ: bw=50.7MiB/s (53.2MB/s), 50.7MiB/s-50.7MiB/s (53.2MB/s-53.2MB/s), io=101MiB (106MB), run=2001-2001msec 00:11:11.782 WRITE: bw=50.6MiB/s (53.1MB/s), 50.6MiB/s-50.6MiB/s (53.1MB/s-53.1MB/s), io=101MiB (106MB), run=2001-2001msec 00:11:12.042 ----------------------------------------------------- 00:11:12.042 Suppressions used: 00:11:12.042 count bytes template 00:11:12.042 1 32 /usr/src/fio/parse.c 00:11:12.042 1 8 libtcmalloc_minimal.so 00:11:12.042 
----------------------------------------------------- 00:11:12.042 00:11:12.042 16:20:31 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:12.042 16:20:31 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:12.042 16:20:31 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:12.042 16:20:31 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:12.303 16:20:31 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:08.0' 00:11:12.303 16:20:31 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:12.565 16:20:32 -- nvme/nvme.sh@41 -- # bs=4096 00:11:12.565 16:20:32 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:12.565 16:20:32 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:12.565 16:20:32 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:12.565 16:20:32 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:12.565 16:20:32 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:12.565 16:20:32 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:12.565 16:20:32 -- common/autotest_common.sh@1330 -- # shift 00:11:12.565 16:20:32 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:12.565 16:20:32 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:12.565 16:20:32 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:12.565 16:20:32 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:12.565 16:20:32 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:12.565 16:20:32 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:12.565 16:20:32 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:12.565 16:20:32 -- common/autotest_common.sh@1336 -- # break 00:11:12.565 16:20:32 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:12.565 16:20:32 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.08.0' --bs=4096 00:11:12.565 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:12.565 fio-3.35 00:11:12.565 Starting 1 thread 00:11:17.849 00:11:17.850 test: (groupid=0, jobs=1): err= 0: pid=65164: Sat Nov 9 16:20:37 2024 00:11:17.850 read: IOPS=15.7k, BW=61.4MiB/s (64.4MB/s)(123MiB/2001msec) 00:11:17.850 slat (usec): min=4, max=127, avg= 7.15, stdev= 3.67 00:11:17.850 clat (usec): min=264, max=13204, avg=4034.03, stdev=1477.84 00:11:17.850 lat (usec): min=269, max=13332, avg=4041.18, stdev=1479.47 00:11:17.850 clat percentiles (usec): 00:11:17.850 | 1.00th=[ 2409], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 2704], 00:11:17.850 | 30.00th=[ 3032], 40.00th=[ 3294], 50.00th=[ 3556], 60.00th=[ 3818], 00:11:17.850 | 70.00th=[ 4424], 80.00th=[ 5473], 90.00th=[ 6325], 95.00th=[ 6915], 00:11:17.850 | 99.00th=[ 7963], 99.50th=[ 8455], 99.90th=[ 9765], 99.95th=[11207], 00:11:17.850 | 99.99th=[13042] 00:11:17.850 bw ( 
KiB/s): min=48304, max=63472, per=89.58%, avg=56316.33, stdev=7620.20, samples=3 00:11:17.850 iops : min=12076, max=15868, avg=14079.00, stdev=1905.04, samples=3 00:11:17.850 write: IOPS=15.7k, BW=61.5MiB/s (64.4MB/s)(123MiB/2001msec); 0 zone resets 00:11:17.850 slat (usec): min=4, max=102, avg= 7.70, stdev= 3.79 00:11:17.850 clat (usec): min=225, max=13077, avg=4069.14, stdev=1489.12 00:11:17.850 lat (usec): min=230, max=13096, avg=4076.84, stdev=1490.73 00:11:17.850 clat percentiles (usec): 00:11:17.850 | 1.00th=[ 2409], 5.00th=[ 2540], 10.00th=[ 2606], 20.00th=[ 2704], 00:11:17.850 | 30.00th=[ 3064], 40.00th=[ 3326], 50.00th=[ 3589], 60.00th=[ 3851], 00:11:17.850 | 70.00th=[ 4490], 80.00th=[ 5538], 90.00th=[ 6390], 95.00th=[ 6980], 00:11:17.850 | 99.00th=[ 7963], 99.50th=[ 8455], 99.90th=[ 9896], 99.95th=[11338], 00:11:17.850 | 99.99th=[12780] 00:11:17.850 bw ( KiB/s): min=48096, max=63264, per=89.43%, avg=56279.00, stdev=7654.64, samples=3 00:11:17.850 iops : min=12024, max=15816, avg=14069.67, stdev=1913.64, samples=3 00:11:17.850 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:11:17.850 lat (msec) : 2=0.23%, 4=64.19%, 10=35.45%, 20=0.09% 00:11:17.850 cpu : usr=98.60%, sys=0.20%, ctx=4, majf=0, minf=609 00:11:17.850 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:17.850 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:17.850 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:17.850 issued rwts: total=31448,31481,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:17.850 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:17.850 00:11:17.850 Run status group 0 (all jobs): 00:11:17.850 READ: bw=61.4MiB/s (64.4MB/s), 61.4MiB/s-61.4MiB/s (64.4MB/s-64.4MB/s), io=123MiB (129MB), run=2001-2001msec 00:11:17.850 WRITE: bw=61.5MiB/s (64.4MB/s), 61.5MiB/s-61.5MiB/s (64.4MB/s-64.4MB/s), io=123MiB (129MB), run=2001-2001msec 00:11:17.850 ----------------------------------------------------- 00:11:17.850 Suppressions used: 00:11:17.850 count bytes template 00:11:17.850 1 32 /usr/src/fio/parse.c 00:11:17.850 1 8 libtcmalloc_minimal.so 00:11:17.850 ----------------------------------------------------- 00:11:17.850 00:11:17.850 16:20:37 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:17.850 16:20:37 -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:11:17.850 16:20:37 -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:17.850 16:20:37 -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:11:17.850 16:20:37 -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:11:17.850 16:20:37 -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:09.0' 00:11:18.111 16:20:37 -- nvme/nvme.sh@41 -- # bs=4096 00:11:18.111 16:20:37 -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:18.111 16:20:37 -- common/autotest_common.sh@1349 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:18.111 16:20:37 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:11:18.111 16:20:37 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:11:18.111 16:20:37 -- common/autotest_common.sh@1328 -- # local sanitizers 00:11:18.111 
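[editor's note] Each per-controller fio pass in this suite, including the 0000:00:09.0 pass whose trace continues below, repeats the same preamble: ldd the SPDK fio plugin, pick out the ASan runtime it links against, and LD_PRELOAD that library ahead of the plugin so the sanitizer's interceptors load first. A condensed sketch of that launch idiom; paths are the ones in this log, the traddr is one example of the four, and the empty-asan_lib handling is my addition for non-sanitizer builds:

  # Sketch of the fio_plugin launch idiom traced above (ASan build assumed).
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  fio_cfg=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio

  # Third column of the matching ldd line, e.g. /usr/lib64/libasan.so.8.
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

  # Preload the sanitizer before the plugin (order matters under ASan),
  # then hand fio a bare PCIe filename for the SPDK external ioengine.
  LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" \
      /usr/src/fio/fio "$fio_cfg" \
      '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096

Note the filename quoting: the whole "trtype=PCIe traddr=..." string, space included, is a single fio argument, which is why every invocation above wraps it in single quotes.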
16:20:37 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:18.111 16:20:37 -- common/autotest_common.sh@1330 -- # shift 00:11:18.111 16:20:37 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:11:18.111 16:20:37 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:11:18.111 16:20:37 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:11:18.111 16:20:37 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:11:18.111 16:20:37 -- common/autotest_common.sh@1334 -- # grep libasan 00:11:18.111 16:20:37 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:11:18.111 16:20:37 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:11:18.112 16:20:37 -- common/autotest_common.sh@1336 -- # break 00:11:18.112 16:20:37 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:11:18.112 16:20:37 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.09.0' --bs=4096 00:11:18.373 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:11:18.373 fio-3.35 00:11:18.373 Starting 1 thread 00:11:24.962 00:11:24.962 test: (groupid=0, jobs=1): err= 0: pid=65236: Sat Nov 9 16:20:44 2024 00:11:24.962 read: IOPS=13.2k, BW=51.4MiB/s (53.9MB/s)(103MiB/2001msec) 00:11:24.962 slat (usec): min=4, max=837, avg= 7.97, stdev= 6.63 00:11:24.962 clat (usec): min=253, max=13165, avg=4833.19, stdev=1608.17 00:11:24.962 lat (usec): min=258, max=13223, avg=4841.17, stdev=1609.54 00:11:24.962 clat percentiles (usec): 00:11:24.962 | 1.00th=[ 2573], 5.00th=[ 2835], 10.00th=[ 3097], 20.00th=[ 3392], 00:11:24.962 | 30.00th=[ 3589], 40.00th=[ 3851], 50.00th=[ 4293], 60.00th=[ 5211], 00:11:24.962 | 70.00th=[ 5866], 80.00th=[ 6456], 90.00th=[ 7111], 95.00th=[ 7570], 00:11:24.962 | 99.00th=[ 8586], 99.50th=[ 9110], 99.90th=[10290], 99.95th=[12256], 00:11:24.962 | 99.99th=[13173] 00:11:24.962 bw ( KiB/s): min=47264, max=52768, per=94.20%, avg=49589.33, stdev=2849.50, samples=3 00:11:24.962 iops : min=11816, max=13192, avg=12397.33, stdev=712.37, samples=3 00:11:24.962 write: IOPS=13.2k, BW=51.4MiB/s (53.9MB/s)(103MiB/2001msec); 0 zone resets 00:11:24.962 slat (usec): min=4, max=975, avg= 8.60, stdev= 7.70 00:11:24.962 clat (usec): min=224, max=13092, avg=4854.90, stdev=1610.82 00:11:24.962 lat (usec): min=230, max=13105, avg=4863.49, stdev=1612.22 00:11:24.962 clat percentiles (usec): 00:11:24.962 | 1.00th=[ 2573], 5.00th=[ 2900], 10.00th=[ 3130], 20.00th=[ 3392], 00:11:24.962 | 30.00th=[ 3589], 40.00th=[ 3851], 50.00th=[ 4359], 60.00th=[ 5211], 00:11:24.962 | 70.00th=[ 5932], 80.00th=[ 6456], 90.00th=[ 7111], 95.00th=[ 7635], 00:11:24.962 | 99.00th=[ 8717], 99.50th=[ 9241], 99.90th=[10421], 99.95th=[12256], 00:11:24.962 | 99.99th=[13042] 00:11:24.962 bw ( KiB/s): min=47768, max=53144, per=94.25%, avg=49626.67, stdev=3047.74, samples=3 00:11:24.962 iops : min=11942, max=13286, avg=12406.67, stdev=761.94, samples=3 00:11:24.962 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.02% 00:11:24.962 lat (msec) : 2=0.17%, 4=43.81%, 10=55.82%, 20=0.13% 00:11:24.962 cpu : usr=97.55%, sys=0.45%, ctx=6, majf=0, minf=606 00:11:24.962 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:11:24.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:24.962 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:24.962 issued rwts: total=26334,26339,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:24.962 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:24.962 00:11:24.962 Run status group 0 (all jobs): 00:11:24.962 READ: bw=51.4MiB/s (53.9MB/s), 51.4MiB/s-51.4MiB/s (53.9MB/s-53.9MB/s), io=103MiB (108MB), run=2001-2001msec 00:11:24.962 WRITE: bw=51.4MiB/s (53.9MB/s), 51.4MiB/s-51.4MiB/s (53.9MB/s-53.9MB/s), io=103MiB (108MB), run=2001-2001msec 00:11:24.962 ----------------------------------------------------- 00:11:24.962 Suppressions used: 00:11:24.962 count bytes template 00:11:24.962 1 32 /usr/src/fio/parse.c 00:11:24.962 1 8 libtcmalloc_minimal.so 00:11:24.962 ----------------------------------------------------- 00:11:24.962 00:11:24.962 ************************************ 00:11:24.962 END TEST nvme_fio 00:11:24.962 ************************************ 00:11:24.962 16:20:44 -- nvme/nvme.sh@44 -- # ran_fio=true 00:11:24.962 16:20:44 -- nvme/nvme.sh@46 -- # true 00:11:24.962 00:11:24.962 real 0m23.979s 00:11:24.962 user 0m15.624s 00:11:24.962 sys 0m14.050s 00:11:24.962 16:20:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:24.962 16:20:44 -- common/autotest_common.sh@10 -- # set +x 00:11:24.962 ************************************ 00:11:24.962 END TEST nvme 00:11:24.962 ************************************ 00:11:24.962 00:11:24.962 real 1m39.499s 00:11:24.962 user 3m39.677s 00:11:24.962 sys 0m25.117s 00:11:24.962 16:20:44 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:24.962 16:20:44 -- common/autotest_common.sh@10 -- # set +x 00:11:24.962 16:20:44 -- spdk/autotest.sh@210 -- # [[ 0 -eq 1 ]] 00:11:24.962 16:20:44 -- spdk/autotest.sh@214 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:24.962 16:20:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:24.962 16:20:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:24.962 16:20:44 -- common/autotest_common.sh@10 -- # set +x 00:11:24.962 ************************************ 00:11:24.962 START TEST nvme_scc 00:11:24.962 ************************************ 00:11:24.962 16:20:44 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:11:24.962 * Looking for test storage... 
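[editor's note] The "Looking for test storage" probe and the lcov version gate replayed just below open every suite run through run_test, exactly as they did at the top of bdev_nvme_reset_stuck_adm_cmd earlier. That gate splits each dotted version on .-: and compares numeric components left to right, so 1.15 < 2 and the legacy --rc lcov_* option spelling is kept. A sketch of the comparison under those assumptions; ver_lt is my name for it (the harness spells it lt/cmp_versions in scripts/common.sh) and purely numeric components are assumed:

  # Sketch of the dotted-version gate behind the "lt 1.15 2" trace below.
  ver_lt() {                    # ver_lt 1.15 2  -> status 0 (1.15 < 2)
      local -a ver1 ver2
      local v
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
          (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
          (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
      done
      return 1                  # equal -> not less-than
  }

  if ver_lt "$(lcov --version | awk '{print $NF}')" 2; then
      echo "lcov is pre-2.x; use the legacy --rc lcov_* option spelling"
  fi

Missing components default to 0 (so 1.15 compares against 2.0), which is why the trace pads the shorter array by iterating to the longer array's length.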
00:11:24.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:24.962 16:20:44 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:24.962 16:20:44 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:24.962 16:20:44 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:24.963 16:20:44 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:24.963 16:20:44 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:24.963 16:20:44 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:24.963 16:20:44 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:24.963 16:20:44 -- scripts/common.sh@335 -- # IFS=.-: 00:11:24.963 16:20:44 -- scripts/common.sh@335 -- # read -ra ver1 00:11:24.963 16:20:44 -- scripts/common.sh@336 -- # IFS=.-: 00:11:24.963 16:20:44 -- scripts/common.sh@336 -- # read -ra ver2 00:11:24.963 16:20:44 -- scripts/common.sh@337 -- # local 'op=<' 00:11:24.963 16:20:44 -- scripts/common.sh@339 -- # ver1_l=2 00:11:24.963 16:20:44 -- scripts/common.sh@340 -- # ver2_l=1 00:11:24.963 16:20:44 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:24.963 16:20:44 -- scripts/common.sh@343 -- # case "$op" in 00:11:24.963 16:20:44 -- scripts/common.sh@344 -- # : 1 00:11:24.963 16:20:44 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:24.963 16:20:44 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:24.963 16:20:44 -- scripts/common.sh@364 -- # decimal 1 00:11:24.963 16:20:44 -- scripts/common.sh@352 -- # local d=1 00:11:24.963 16:20:44 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:24.963 16:20:44 -- scripts/common.sh@354 -- # echo 1 00:11:24.963 16:20:44 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:24.963 16:20:44 -- scripts/common.sh@365 -- # decimal 2 00:11:24.963 16:20:44 -- scripts/common.sh@352 -- # local d=2 00:11:24.963 16:20:44 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:24.963 16:20:44 -- scripts/common.sh@354 -- # echo 2 00:11:24.963 16:20:44 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:24.963 16:20:44 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:24.963 16:20:44 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:24.963 16:20:44 -- scripts/common.sh@367 -- # return 0 00:11:24.963 16:20:44 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:24.963 16:20:44 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:24.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.963 --rc genhtml_branch_coverage=1 00:11:24.963 --rc genhtml_function_coverage=1 00:11:24.963 --rc genhtml_legend=1 00:11:24.963 --rc geninfo_all_blocks=1 00:11:24.963 --rc geninfo_unexecuted_blocks=1 00:11:24.963 00:11:24.963 ' 00:11:24.963 16:20:44 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:24.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.963 --rc genhtml_branch_coverage=1 00:11:24.963 --rc genhtml_function_coverage=1 00:11:24.963 --rc genhtml_legend=1 00:11:24.963 --rc geninfo_all_blocks=1 00:11:24.963 --rc geninfo_unexecuted_blocks=1 00:11:24.963 00:11:24.963 ' 00:11:24.963 16:20:44 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:24.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.963 --rc genhtml_branch_coverage=1 00:11:24.963 --rc genhtml_function_coverage=1 00:11:24.963 --rc genhtml_legend=1 00:11:24.963 --rc geninfo_all_blocks=1 00:11:24.963 --rc geninfo_unexecuted_blocks=1 00:11:24.963 00:11:24.963 ' 00:11:24.963 16:20:44 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:24.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:24.963 --rc genhtml_branch_coverage=1 00:11:24.963 --rc genhtml_function_coverage=1 00:11:24.963 --rc genhtml_legend=1 00:11:24.963 --rc geninfo_all_blocks=1 00:11:24.963 --rc geninfo_unexecuted_blocks=1 00:11:24.963 00:11:24.963 ' 00:11:24.963 16:20:44 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:24.963 16:20:44 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:24.963 16:20:44 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:24.963 16:20:44 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:24.963 16:20:44 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:24.963 16:20:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:24.963 16:20:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:24.963 16:20:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:24.963 16:20:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.963 16:20:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.963 16:20:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.963 16:20:44 -- paths/export.sh@5 -- # export PATH 00:11:24.963 16:20:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:24.963 16:20:44 -- nvme/functions.sh@10 -- # ctrls=() 00:11:24.963 16:20:44 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:24.963 16:20:44 -- nvme/functions.sh@11 -- # nvmes=() 00:11:24.963 16:20:44 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:24.963 16:20:44 -- nvme/functions.sh@12 -- # bdfs=() 00:11:24.963 16:20:44 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:24.963 16:20:44 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:24.963 16:20:44 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:24.963 
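[editor's note] The long scan_nvme_ctrls trace that follows is nvme_get filling a global associative array, one field per line of nvme id-ctrl output: each "name : value" line is split on its first colon and stored as nvme0[name]=value, which is what the repeating IFS=:/read/eval triplets below are doing for vid, ssvid, sn, mn, and so on. A minimal sketch of that parse loop; the nvme-cli path is the one traced below, while the whitespace trimming is my simplification (the real helper preserves the padded sn/mn strings and also walks per-namespace sections):

  # Sketch of the nvme_get parse idiom traced below: turn
  # "vid : 0x1b36"-style lines from id-ctrl into nvme0[vid]=0x1b36.
  declare -gA nvme0=()

  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}        # field name, padding stripped
      val=${val# }                    # value, minus the leading space
      [[ -n $reg && -n $val ]] || continue
      nvme0[$reg]=$val
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

  echo "${nvme0[sn]} (${nvme0[mn]})"  # e.g. "12343 (QEMU NVMe Ctrl)"

Reading the controller into an array once up front is what lets later tests query capabilities (mdts, oacs, lpa, ...) with plain parameter expansion instead of re-running nvme-cli per field.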
16:20:44 -- nvme/functions.sh@14 -- # nvme_name= 00:11:24.963 16:20:44 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:24.963 16:20:44 -- nvme/nvme_scc.sh@12 -- # uname 00:11:24.963 16:20:44 -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:11:24.963 16:20:44 -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:11:24.963 16:20:44 -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:25.533 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:25.533 Waiting for block devices as requested 00:11:25.533 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:25.533 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:25.793 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:25.793 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:31.092 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:31.092 16:20:50 -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:11:31.092 16:20:50 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:31.092 16:20:50 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:31.092 16:20:50 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:31.092 16:20:50 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:31.092 16:20:50 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:31.092 16:20:50 -- scripts/common.sh@15 -- # local i 00:11:31.092 16:20:50 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:31.092 16:20:50 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:31.092 16:20:50 -- scripts/common.sh@24 -- # return 0 00:11:31.092 16:20:50 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:31.092 16:20:50 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:31.092 16:20:50 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:31.092 16:20:50 -- nvme/functions.sh@18 -- # shift 00:11:31.092 16:20:50 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:31.092 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.092 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.092 16:20:50 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:31.092 16:20:50 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.092 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.092 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.092 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:31.092 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:31.092 16:20:50 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 
00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 
16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x88010"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:31.093 16:20:50 -- 
nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.093 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:31.093 16:20:50 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:31.093 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:31.094 16:20:50 
-- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.094 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.094 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.094 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:31.094 
16:20:50 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 
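The trace above is nvme_get filling the global associative array nvme0, one register per "field : value" line emitted by /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0; the lines that follow record the finished controller in the ctrls/nvmes/bdfs tables (bdf 0000:00:09.0) and advance the /sys/class/nvme/nvme* loop to nvme1, which pci_can_use admits because the allow/block lists are empty. A minimal sketch of that parsing pattern, mirroring the IFS=:/read/eval steps visible in the trace (the shipped helper is nvme_get in nvme/functions.sh; its shift/nameref plumbing is omitted here, and nvme_get_sketch is a name used only for this illustration):

    nvme_get_sketch() {                        # sketch only, not the shipped helper
        local ref=$1 dev=$2 reg val
        local -gA "$ref=()"                    # e.g. declare -gA nvme0=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}           # drop column padding around the field name
            [[ -n $reg && -n ${val# } ]] || continue
            eval "${ref}[$reg]=\"${val# }\""   # e.g. nvme0[vid]="0x1b36"
        done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
    }
    nvme_get_sketch nvme0 /dev/nvme0           # afterwards: ${nvme0[mdts]} -> 7

Once populated, lookups like ${nvme0[mdts]} and ${nvme0[oncs]} are plain array reads, which is what lets later test stages gate on controller features without re-invoking nvme-cli.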
00:11:31.095 16:20:50 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:31.095 16:20:50 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:11:31.095 16:20:50 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:31.095 16:20:50 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:31.095 16:20:50 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:31.095 16:20:50 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:31.095 16:20:50 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:31.095 16:20:50 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:31.095 16:20:50 -- scripts/common.sh@15 -- # local i 00:11:31.095 16:20:50 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:31.095 16:20:50 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:31.095 16:20:50 -- scripts/common.sh@24 -- # return 0 00:11:31.095 16:20:50 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:31.095 16:20:50 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:31.095 16:20:50 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@18 -- # shift 00:11:31.095 16:20:50 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:31.095 16:20:50 -- 
nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.095 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:31.095 16:20:50 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:31.095 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:31.096 16:20:50 -- 
nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.096 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.096 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.096 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 
00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # 
nvme1[awupf]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.097 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.097 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:31.097 16:20:50 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:31.098 16:20:50 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:31.098 16:20:50 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:31.098 16:20:50 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:31.098 16:20:50 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@18 -- # shift 00:11:31.098 16:20:50 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 
-- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # 
nvme1n1[nmic]=0 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.098 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.098 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:31.098 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.099 16:20:50 -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read 
-r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 
00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:31.099 16:20:50 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:31.099 16:20:50 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:31.099 16:20:50 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:31.099 16:20:50 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@18 -- # shift 00:11:31.099 16:20:50 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.099 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:31.099 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:31.099 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.100 16:20:50 -- 
nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # 
read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:31.100 16:20:50 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:31.100 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.100 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.100 16:20:50 -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 
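(Note on the trace above: every nvme_get invocation follows the same pattern, run nvme-cli's id-ns or id-ctrl against a device, split each "field : value" output line on the colon, and stash non-empty values into a global associative array named after the device, here nvme1n1 and nvme1n2. A minimal bash sketch of that pattern, reconstructed from the trace rather than copied from functions.sh; the exact whitespace trimming is an assumption:)

  #!/usr/bin/env bash
  # Sketch of the nvme_get pattern seen in this trace: parse "field : value"
  # output into a global associative array whose name the caller supplies.
  nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                    # e.g. declare -gA nvme1n1=()
    while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}             # drop padding around the field name (assumed trim)
      val=${val#"${val%%[![:space:]]*}"}   # strip leading spaces from the value
      [[ -n $val ]] && eval "${ref}[\$reg]=\$val"
    done < <("$@")                         # remaining args are the command to run
  }

  # Usage mirroring the trace (nvme-cli path as it appears in this log):
  nvme_get nvme1n1 /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
  echo "${nvme1n1[nsze]}"                  # prints 0x100000 on this test VM

(Because read is given two variables, everything after the first colon lands in val, which is why multi-colon lines such as "lbaf0 : ms:0 lbads:9 rp:0" survive intact in the arrays above.)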
00:11:31.101 16:20:50 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:31.101 16:20:50 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:31.101 16:20:50 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:31.101 16:20:50 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@18 -- # shift 00:11:31.101 16:20:50 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 
00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 
'nvme1n3[nabspf]="0"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.101 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.101 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:31.101 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[nulbaf]=0 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 
-- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:31.102 16:20:50 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:31.102 16:20:50 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:31.102 16:20:50 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:31.102 16:20:50 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:31.102 16:20:50 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:31.102 16:20:50 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:31.102 16:20:50 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:31.102 16:20:50 -- scripts/common.sh@15 -- # local i 00:11:31.102 16:20:50 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:31.102 16:20:50 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:31.102 16:20:50 -- scripts/common.sh@24 -- # return 0 00:11:31.102 16:20:50 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:31.102 16:20:50 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:31.102 16:20:50 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@18 -- # shift 00:11:31.102 16:20:50 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl 
/dev/nvme2 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.102 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:31.102 16:20:50 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.102 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg 
val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 
'nvme2[crdt3]="0"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.103 
16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.103 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.103 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.103 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 
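(Context for the nvme2 block being dumped here: just before this id-ctrl parse, the trace shows functions.sh iterating /sys/class/nvme/nvme*, resolving the controller's PCI address, 0000:00:06.0 in this case, and gating it through scripts/common.sh's pci_can_use before calling nvme_get. A sketch of that discovery loop, building on the nvme_get sketch earlier in this log; the readlink-based BDF lookup, the nvme_cmd variable, and the pci_can_use stub are assumptions, not the script's literal code:)

  declare -A ctrls bdfs                    # per-controller maps the trace populates
  nvme_cmd=/usr/local/src/nvme-cli/nvme    # hypothetical variable; the log invokes this path directly

  # Stand-in for scripts/common.sh's gate; the real logic matches the BDF
  # against allow/block lists (the trace shows regex tests against empty lists).
  pci_can_use() { [[ " ${PCI_BLOCKED-} " != *" $1 "* ]]; }

  for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    ctrl_dev=${ctrl##*/}                              # e.g. nvme2
    pci=$(basename "$(readlink -f "$ctrl/device")")   # BDF, e.g. 0000:00:06.0 (assumed lookup)
    pci_can_use "$pci" || continue
    nvme_get "$ctrl_dev" "$nvme_cmd" id-ctrl "/dev/$ctrl_dev"
    ctrls["$ctrl_dev"]=$ctrl_dev
    bdfs["$ctrl_dev"]=$pci                            # matches bdfs["$ctrl_dev"]=... in the trace
  done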
00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:31.104 16:20:50 -- 
nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.104 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.104 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.104 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.105 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.105 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.105 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.105 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:31.105 16:20:50 -- nvme/functions.sh@23 
-- # eval 'nvme2[sgls]="0x1"' 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.105 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.105 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.105 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.105 16:20:50 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.105 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.105 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.105 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.105 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.105 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.105 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:31.105 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.105 16:20:50 -- nvme/functions.sh@21 
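The nvme_get pattern traced above splits every line that /usr/local/src/nvme-cli/nvme id-ctrl prints on the first ':' and stores it in a bash associative array named after the controller. A minimal standalone sketch of the same technique, with illustrative names rather than the exact functions.sh source:

  #!/usr/bin/env bash
  # Parse "register : value" lines from nvme-cli into an associative array.
  declare -A ctrl
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}        # register names are padded around the colon
      [[ -n $reg && -n $val ]] || continue
      ctrl[$reg]=${val# }             # drop the space nvme-cli prints after ':'
  done < <(nvme id-ctrl /dev/nvme2)
  echo "nn=${ctrl[nn]} oncs=${ctrl[oncs]} subnqn=${ctrl[subnqn]}"

Values such as oncs=0x15d and vwc=0x7 stay hex strings in the array; any bit tests happen where the tests consume them.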
00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' nvme2[active_power_workload]=-
00:11:31.105 16:20:50 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:11:31.105 16:20:50 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:11:31.105 16:20:50 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:11:31.105 16:20:50 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:11:31.105 16:20:50 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:11:31.105 16:20:50 -- nvme/functions.sh@17-20 -- # local ref=nvme2n1 reg val ; shift ; local -gA 'nvme2n1=()'
00:11:31.105 16:20:50 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a nvme2n1[ncap]=0x17a17a nvme2n1[nuse]=0x17a17a
00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 nvme2n1[nlbaf]=7 nvme2n1[flbas]=0x7 nvme2n1[mc]=0x3 nvme2n1[dpc]=0x1f nvme2n1[dps]=0
00:11:31.105 16:20:50 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 nvme2n1[rescap]=0 nvme2n1[fpi]=0 nvme2n1[dlfeat]=1 nvme2n1[nawun]=0 nvme2n1[nawupf]=0
00:11:31.106 16:20:50 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 nvme2n1[nabsn]=0 nvme2n1[nabo]=0 nvme2n1[nabspf]=0 nvme2n1[noiob]=0 nvme2n1[nvmcap]=0
00:11:31.106 16:20:50 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 nvme2n1[npwa]=0 nvme2n1[npdg]=0 nvme2n1[npda]=0 nvme2n1[nows]=0
00:11:31.106 16:20:50 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 nvme2n1[mcl]=128 nvme2n1[msrc]=127 nvme2n1[nulbaf]=0
00:11:31.106 16:20:50 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 nvme2n1[nsattr]=0 nvme2n1[nvmsetid]=0 nvme2n1[endgid]=0
00:11:31.106 16:20:50 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 nvme2n1[eui64]=0000000000000000
00:11:31.106 16:20:50 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:11:31.106 16:20:50 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
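flbas picks the active entry out of lbaf0..lbaf7: here 0x7 selects lbaf7, the 'ms:64 lbads:12 rp:0 (in use)' format, i.e. 4096-byte data blocks with 64 bytes of metadata. A small sketch of that decoding, reusing the values captured above (variable names are illustrative):

  flbas=0x7
  lbaf7='ms:64 lbads:12 rp:0 (in use)'
  fmt=$(( flbas & 0xf ))             # bits 3:0 of FLBAS index the LBA format
  ref="lbaf$fmt"
  [[ ${!ref} =~ lbads:([0-9]+) ]]    # LBADS is the log2 of the data block size
  echo "active format lbaf$fmt, block size $(( 1 << BASH_REMATCH[1] )) bytes"

nvme3n1 below reports flbas=0x4, so it sits on lbaf4 (ms:0 lbads:12): 4096-byte blocks with no metadata.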
00:11:31.106 16:20:50 -- nvme/functions.sh@58 -- # _ctrl_ns[1]=nvme2n1
00:11:31.106 16:20:50 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:11:31.106 16:20:50 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:11:31.106 16:20:50 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0
00:11:31.106 16:20:50 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:11:31.106 16:20:50 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:11:31.106 16:20:50 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:11:31.106 16:20:50 -- nvme/functions.sh@49 -- # pci=0000:00:07.0
00:11:31.106 16:20:50 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0
00:11:31.107 16:20:50 -- scripts/common.sh@15 -- # local i
00:11:31.107 16:20:50 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]]
00:11:31.107 16:20:50 -- scripts/common.sh@22 -- # [[ -z '' ]]
00:11:31.107 16:20:50 -- scripts/common.sh@24 -- # return 0
00:11:31.107 16:20:50 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:11:31.107 16:20:50 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:11:31.107 16:20:50 -- nvme/functions.sh@17-20 -- # local ref=nvme3 reg val ; shift ; local -gA 'nvme3=()'
00:11:31.107 16:20:50 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:11:31.107 16:20:50 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 nvme3[ssvid]=0x1af4 nvme3[sn]='12341   ' nvme3[mn]='QEMU NVMe Ctrl  ' nvme3[fr]='8.0.0   '
00:11:31.107 16:20:50 -- nvme/functions.sh@23 -- # nvme3[rab]=6 nvme3[ieee]=525400 nvme3[cmic]=0 nvme3[mdts]=7 nvme3[cntlid]=0 nvme3[ver]=0x10400
00:11:31.107 16:20:50 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 nvme3[rtd3e]=0 nvme3[oaes]=0x100 nvme3[ctratt]=0x8000 nvme3[rrls]=0 nvme3[cntrltype]=1
00:11:31.107 16:20:50 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 nvme3[crdt1]=0 nvme3[crdt2]=0 nvme3[crdt3]=0
00:11:31.107 16:20:50 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 nvme3[vwci]=0 nvme3[mec]=0 nvme3[oacs]=0x12a nvme3[acl]=3 nvme3[aerl]=3
00:11:31.108 16:20:50 -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 nvme3[lpa]=0x7 nvme3[elpe]=0 nvme3[npss]=0 nvme3[avscc]=0 nvme3[apsta]=0
00:11:31.108 16:20:50 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 nvme3[cctemp]=373
00:11:31.371 16:20:50 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 nvme3[hmpre]=0 nvme3[hmmin]=0 nvme3[tnvmcap]=0 nvme3[unvmcap]=0 nvme3[rpmbs]=0
00:11:31.371 16:20:50 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 nvme3[dsto]=0 nvme3[fwug]=0 nvme3[kas]=0 nvme3[hctma]=0 nvme3[mntmt]=0 nvme3[mxtmt]=0
00:11:31.372 16:20:50 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 nvme3[hmminds]=0 nvme3[hmmaxd]=0 nvme3[nsetidmax]=0 nvme3[endgidmax]=0
00:11:31.372 16:20:50 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 nvme3[anacap]=0 nvme3[anagrpmax]=0 nvme3[nanagrpid]=0 nvme3[pels]=0 nvme3[domainid]=0 nvme3[megcap]=0
00:11:31.372 16:20:50 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 nvme3[cqes]=0x44 nvme3[maxcmd]=0 nvme3[nn]=256 nvme3[oncs]=0x15d nvme3[fuses]=0 nvme3[fna]=0
00:11:31.372 16:20:50 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 nvme3[awun]=0 nvme3[awupf]=0 nvme3[icsvscc]=0 nvme3[nwpc]=0 nvme3[acwu]=0 nvme3[ocfs]=0x3
00:11:31.372 16:20:50 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 nvme3[mnan]=0 nvme3[maxdna]=0 nvme3[maxcna]=0 nvme3[subnqn]=nqn.2019-08.org.qemu:12341
00:11:31.372 16:20:50 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 nvme3[iorcsz]=0 nvme3[icdoff]=0 nvme3[fcatt]=0 nvme3[msdbd]=0 nvme3[ofcs]=0
00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' nvme3[active_power_workload]=-
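Every controller that passes pci_can_use is filed into the ctrls/nvmes/bdfs/ordered_ctrls maps, as the @58-@63 lines above show for nvme2 (BDF 0000:00:06.0). The same nvmeX-to-PCI-address mapping can be recovered straight from sysfs; a hedged sketch, not the functions.sh implementation itself:

  declare -A bdfs
  for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl/device ]] || continue
      # 'device' is a symlink to the controller's PCI node, e.g. .../0000:00:06.0
      bdfs[${ctrl##*/}]=$(basename "$(readlink -f "$ctrl/device")")
  done
  for name in "${!bdfs[@]}"; do echo "$name -> ${bdfs[$name]}"; done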
16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 
]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwg]="0"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:31.373 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.373 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.373 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # 
nvme3n1[nsattr]=0 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r 
reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:31.374 16:20:50 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # IFS=: 00:11:31.374 16:20:50 -- nvme/functions.sh@21 -- # read -r reg val 00:11:31.374 16:20:50 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:31.374 16:20:50 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:31.374 16:20:50 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:31.374 16:20:50 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:31.374 16:20:50 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:31.374 16:20:50 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:31.374 16:20:50 -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:11:31.374 16:20:50 -- nvme/functions.sh@202 -- # local _ctrls feature=scc 00:11:31.374 16:20:50 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:31.374 16:20:50 -- nvme/functions.sh@204 -- # get_ctrls_with_feature scc 00:11:31.374 16:20:50 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:31.374 16:20:50 -- nvme/functions.sh@192 -- # local ctrl feature=scc 00:11:31.374 16:20:50 -- nvme/functions.sh@194 -- # type -t ctrl_has_scc 00:11:31.374 16:20:50 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:31.374 16:20:50 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme1 00:11:31.374 16:20:50 -- nvme/functions.sh@182 -- # local ctrl=nvme1 oncs 00:11:31.374 16:20:50 -- nvme/functions.sh@184 -- # get_oncs nvme1 00:11:31.374 16:20:50 -- nvme/functions.sh@169 -- # local ctrl=nvme1 00:11:31.374 16:20:50 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme1 oncs 00:11:31.374 16:20:50 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:11:31.374 16:20:50 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:31.374 16:20:50 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:31.374 16:20:50 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:31.374 16:20:50 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:31.374 16:20:50 -- nvme/functions.sh@197 -- # echo nvme1 00:11:31.374 16:20:50 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:31.374 16:20:50 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme0 00:11:31.374 16:20:50 -- nvme/functions.sh@182 -- # local ctrl=nvme0 oncs 00:11:31.374 16:20:50 -- nvme/functions.sh@184 -- # get_oncs nvme0 00:11:31.374 
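The selection loop above answers get_ctrl_with_feature scc by reading each controller's ONCS word (0x15d here) and testing bit 8, which advertises the Copy (Simple Copy) command. The same gate as a standalone sketch (helper name is illustrative):

    # ONCS bit 8 = Copy command support, per the NVMe base specification.
    has_simple_copy() {
        local oncs
        oncs=$(nvme id-ctrl "/dev/$1" | awk -F: '/^oncs/ {gsub(/ /, "", $2); print $2}')
        (( oncs & 1 << 8 ))                          # 0x15d & 0x100 -> non-zero
    }
    has_simple_copy nvme1 && echo "nvme1 supports Simple Copy"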
16:20:50 -- nvme/functions.sh@169 -- # local ctrl=nvme0 00:11:31.374 16:20:50 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme0 oncs 00:11:31.374 16:20:50 -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:11:31.374 16:20:50 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:31.374 16:20:50 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:31.374 16:20:50 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:31.374 16:20:50 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:31.374 16:20:50 -- nvme/functions.sh@197 -- # echo nvme0 00:11:31.374 16:20:50 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:31.374 16:20:50 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme3 00:11:31.374 16:20:50 -- nvme/functions.sh@182 -- # local ctrl=nvme3 oncs 00:11:31.374 16:20:50 -- nvme/functions.sh@184 -- # get_oncs nvme3 00:11:31.374 16:20:50 -- nvme/functions.sh@169 -- # local ctrl=nvme3 00:11:31.374 16:20:50 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme3 oncs 00:11:31.374 16:20:50 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:11:31.374 16:20:50 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:31.374 16:20:50 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:31.374 16:20:50 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:31.374 16:20:50 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:31.374 16:20:50 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:31.374 16:20:50 -- nvme/functions.sh@197 -- # echo nvme3 00:11:31.374 16:20:50 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:31.374 16:20:50 -- nvme/functions.sh@197 -- # ctrl_has_scc nvme2 00:11:31.374 16:20:50 -- nvme/functions.sh@182 -- # local ctrl=nvme2 oncs 00:11:31.374 16:20:50 -- nvme/functions.sh@184 -- # get_oncs nvme2 00:11:31.375 16:20:50 -- nvme/functions.sh@169 -- # local ctrl=nvme2 00:11:31.375 16:20:50 -- nvme/functions.sh@170 -- # get_nvme_ctrl_feature nvme2 oncs 00:11:31.375 16:20:50 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:11:31.375 16:20:50 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:31.375 16:20:50 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:31.375 16:20:50 -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:11:31.375 16:20:50 -- nvme/functions.sh@76 -- # echo 0x15d 00:11:31.375 16:20:50 -- nvme/functions.sh@184 -- # oncs=0x15d 00:11:31.375 16:20:50 -- nvme/functions.sh@186 -- # (( oncs & 1 << 8 )) 00:11:31.375 16:20:50 -- nvme/functions.sh@197 -- # echo nvme2 00:11:31.375 16:20:50 -- nvme/functions.sh@205 -- # (( 4 > 0 )) 00:11:31.375 16:20:50 -- nvme/functions.sh@206 -- # echo nvme1 00:11:31.375 16:20:50 -- nvme/functions.sh@207 -- # return 0 00:11:31.375 16:20:50 -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:11:31.375 16:20:50 -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:08.0 00:11:31.375 16:20:50 -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:32.318 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:32.318 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:32.318 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:32.318 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:11:32.579 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:32.579 16:20:52 -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy 
/home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:32.579 16:20:52 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:32.579 16:20:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:32.579 16:20:52 -- common/autotest_common.sh@10 -- # set +x 00:11:32.579 ************************************ 00:11:32.579 START TEST nvme_simple_copy 00:11:32.579 ************************************ 00:11:32.579 16:20:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:08.0' 00:11:32.845 Initializing NVMe Controllers 00:11:32.845 Attaching to 0000:00:08.0 00:11:32.845 Controller supports SCC. Attached to 0000:00:08.0 00:11:32.845 Namespace ID: 1 size: 4GB 00:11:32.845 Initialization complete. 00:11:32.845 00:11:32.845 Controller QEMU NVMe Ctrl (12342 ) 00:11:32.845 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:11:32.845 Namespace Block Size:4096 00:11:32.845 Writing LBAs 0 to 63 with Random Data 00:11:32.845 Copied LBAs from 0 - 63 to the Destination LBA 256 00:11:32.845 LBAs matching Written Data: 64 00:11:32.845 00:11:32.845 real 0m0.287s 00:11:32.845 user 0m0.102s 00:11:32.845 sys 0m0.082s 00:11:32.845 ************************************ 00:11:32.845 END TEST nvme_simple_copy 00:11:32.845 ************************************ 00:11:32.845 16:20:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:32.845 16:20:52 -- common/autotest_common.sh@10 -- # set +x 00:11:32.845 ************************************ 00:11:32.845 END TEST nvme_scc 00:11:32.845 ************************************ 00:11:32.845 00:11:32.845 real 0m8.062s 00:11:32.845 user 0m1.171s 00:11:32.845 sys 0m1.595s 00:11:32.845 16:20:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:32.845 16:20:52 -- common/autotest_common.sh@10 -- # set +x 00:11:32.845 16:20:52 -- spdk/autotest.sh@216 -- # [[ 0 -eq 1 ]] 00:11:32.845 16:20:52 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:11:32.845 16:20:52 -- spdk/autotest.sh@222 -- # [[ '' -eq 1 ]] 00:11:32.845 16:20:52 -- spdk/autotest.sh@225 -- # [[ 1 -eq 1 ]] 00:11:32.845 16:20:52 -- spdk/autotest.sh@226 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:11:32.845 16:20:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:32.845 16:20:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:32.845 16:20:52 -- common/autotest_common.sh@10 -- # set +x 00:11:32.845 ************************************ 00:11:32.845 START TEST nvme_fdp 00:11:32.845 ************************************ 00:11:32.845 16:20:52 -- common/autotest_common.sh@1114 -- # test/nvme/nvme_fdp.sh 00:11:33.106 * Looking for test storage... 
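The nvme_simple_copy pass above wrote LBAs 0 through 63 with random data, issued a Simple Copy to destination LBA 256, and counted 64 matching LBAs against the reported 4096-byte block size. The equivalent read-back check can be sketched with plain dd and cmp, assuming the namespace is visible through the kernel nvme driver at that moment (device and paths are illustrative; during the test itself the controller is bound to uio_pci_generic):

    # Compare the 64-block source and destination ranges of the copy.
    dd if=/dev/nvme1n1 bs=4096 skip=0   count=64 of=/tmp/scc_src.bin status=none
    dd if=/dev/nvme1n1 bs=4096 skip=256 count=64 of=/tmp/scc_dst.bin status=none
    cmp -s /tmp/scc_src.bin /tmp/scc_dst.bin && echo 'LBAs matching Written Data: 64'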
00:11:33.106 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:33.106 16:20:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:33.106 16:20:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:33.106 16:20:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:33.106 16:20:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:33.106 16:20:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:33.106 16:20:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:33.106 16:20:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:33.106 16:20:52 -- scripts/common.sh@335 -- # IFS=.-: 00:11:33.106 16:20:52 -- scripts/common.sh@335 -- # read -ra ver1 00:11:33.106 16:20:52 -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.106 16:20:52 -- scripts/common.sh@336 -- # read -ra ver2 00:11:33.106 16:20:52 -- scripts/common.sh@337 -- # local 'op=<' 00:11:33.106 16:20:52 -- scripts/common.sh@339 -- # ver1_l=2 00:11:33.106 16:20:52 -- scripts/common.sh@340 -- # ver2_l=1 00:11:33.106 16:20:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:33.106 16:20:52 -- scripts/common.sh@343 -- # case "$op" in 00:11:33.106 16:20:52 -- scripts/common.sh@344 -- # : 1 00:11:33.106 16:20:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:33.106 16:20:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.106 16:20:52 -- scripts/common.sh@364 -- # decimal 1 00:11:33.106 16:20:52 -- scripts/common.sh@352 -- # local d=1 00:11:33.106 16:20:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.106 16:20:52 -- scripts/common.sh@354 -- # echo 1 00:11:33.106 16:20:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:33.106 16:20:52 -- scripts/common.sh@365 -- # decimal 2 00:11:33.106 16:20:52 -- scripts/common.sh@352 -- # local d=2 00:11:33.106 16:20:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.106 16:20:52 -- scripts/common.sh@354 -- # echo 2 00:11:33.106 16:20:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:33.106 16:20:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:33.106 16:20:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:33.106 16:20:52 -- scripts/common.sh@367 -- # return 0 00:11:33.106 16:20:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.106 16:20:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:33.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.106 --rc genhtml_branch_coverage=1 00:11:33.106 --rc genhtml_function_coverage=1 00:11:33.106 --rc genhtml_legend=1 00:11:33.106 --rc geninfo_all_blocks=1 00:11:33.106 --rc geninfo_unexecuted_blocks=1 00:11:33.106 00:11:33.106 ' 00:11:33.106 16:20:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:33.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.106 --rc genhtml_branch_coverage=1 00:11:33.106 --rc genhtml_function_coverage=1 00:11:33.106 --rc genhtml_legend=1 00:11:33.106 --rc geninfo_all_blocks=1 00:11:33.106 --rc geninfo_unexecuted_blocks=1 00:11:33.106 00:11:33.106 ' 00:11:33.106 16:20:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:33.106 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.106 --rc genhtml_branch_coverage=1 00:11:33.106 --rc genhtml_function_coverage=1 00:11:33.106 --rc genhtml_legend=1 00:11:33.106 --rc geninfo_all_blocks=1 00:11:33.106 --rc geninfo_unexecuted_blocks=1 00:11:33.107 00:11:33.107 ' 00:11:33.107 16:20:52 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:33.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.107 --rc genhtml_branch_coverage=1 00:11:33.107 --rc genhtml_function_coverage=1 00:11:33.107 --rc genhtml_legend=1 00:11:33.107 --rc geninfo_all_blocks=1 00:11:33.107 --rc geninfo_unexecuted_blocks=1 00:11:33.107 00:11:33.107 ' 00:11:33.107 16:20:52 -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:33.107 16:20:52 -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:11:33.107 16:20:52 -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:11:33.107 16:20:52 -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:11:33.107 16:20:52 -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:33.107 16:20:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.107 16:20:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.107 16:20:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.107 16:20:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.107 16:20:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.107 16:20:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.107 16:20:52 -- paths/export.sh@5 -- # export PATH 00:11:33.107 16:20:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.107 16:20:52 -- nvme/functions.sh@10 -- # ctrls=() 00:11:33.107 16:20:52 -- nvme/functions.sh@10 -- # declare -A ctrls 00:11:33.107 16:20:52 -- nvme/functions.sh@11 -- # nvmes=() 00:11:33.107 16:20:52 -- nvme/functions.sh@11 -- # declare -A nvmes 00:11:33.107 16:20:52 -- nvme/functions.sh@12 -- # bdfs=() 00:11:33.107 16:20:52 -- nvme/functions.sh@12 -- # declare -A bdfs 00:11:33.107 16:20:52 -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:11:33.107 16:20:52 -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:11:33.107 
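The lcov gate traced a little earlier (scripts/common.sh, lt 1.15 2) splits both version strings on '.', '-' and ':' and compares them field by field. A compact standalone sketch of that comparison (function name is illustrative; missing fields are treated as zero here):

    version_lt() {                 # returns 0 when $1 < $2, e.g. 1.15 < 2
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1                   # equal is not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov 1.x detected'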
16:20:52 -- nvme/functions.sh@14 -- # nvme_name= 00:11:33.107 16:20:52 -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:33.107 16:20:52 -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:33.678 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:33.678 Waiting for block devices as requested 00:11:33.678 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:11:33.678 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:11:33.976 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:11:33.976 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:11:39.281 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:11:39.281 16:20:58 -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:11:39.281 16:20:58 -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:11:39.281 16:20:58 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:39.281 16:20:58 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:11:39.281 16:20:58 -- nvme/functions.sh@49 -- # pci=0000:00:09.0 00:11:39.281 16:20:58 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:09.0 00:11:39.281 16:20:58 -- scripts/common.sh@15 -- # local i 00:11:39.281 16:20:58 -- scripts/common.sh@18 -- # [[ =~ 0000:00:09.0 ]] 00:11:39.281 16:20:58 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:39.281 16:20:58 -- scripts/common.sh@24 -- # return 0 00:11:39.281 16:20:58 -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:11:39.281 16:20:58 -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:11:39.281 16:20:58 -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:11:39.281 16:20:58 -- nvme/functions.sh@18 -- # shift 00:11:39.281 16:20:58 -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:11:39.281 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.281 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12343 "' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[sn]='12343 ' 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:11:39.282 16:20:58 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0x2"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[cmic]=0x2 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 
'nvme0[ctratt]="0x88010"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x88010 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 
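The ctratt word just captured, 0x88010, is what makes this controller (identified further down by its subsystem NQN nqn.2019-08.org.qemu:fdp-subsys3) interesting to the FDP test: CTRATT bit 19 advertises Flexible Data Placement support. A check along these lines is all that is needed:

    # CTRATT bit 19 = Flexible Data Placement (FDP), per NVMe TP4146.
    ctratt=0x88010
    if (( ctratt & 1 << 19 )); then                  # 0x88010 & 0x80000 -> non-zero
        echo 'controller advertises FDP'
    fi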
16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:11:39.282 16:20:58 -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.282 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.282 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:11:39.283 
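wctemp and cctemp just above are reported in Kelvin, so the QEMU defaults decode to 70 C (warning) and 100 C (critical):

    # Identify reports composite-temperature thresholds in Kelvin.
    for k in 343 373; do
        printf '%d K = %d C\n' "$k" $(( k - 273 ))   # 70 C warning, 100 C critical
    done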
16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:11:39.283 16:20:58 -- 
nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="1"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[endgidmax]=1 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 
16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.283 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:11:39.283 16:20:58 -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:11:39.283 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 
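sqes=0x66 and cqes=0x44 above pack two log2 sizes into one byte each: the low nibble is the required entry size, the high nibble the maximum. Both nibbles being equal here, the controller uses the standard 64-byte submission and 16-byte completion queue entries:

    # Low nibble = required entry size (log2), high nibble = maximum (log2).
    sqes=0x66 cqes=0x44
    printf 'SQE: %d bytes, CQE: %d bytes\n' \
        $(( 1 << (sqes & 0xf) )) $(( 1 << (cqes & 0xf) ))   # 64 and 16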
00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # 
nvme0[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:11:39.284 16:20:58 -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:11:39.284 16:20:58 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 
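What the trace above records is nvme_get filling in the nvme0 associative array: it runs /usr/local/src/nvme-cli/nvme id-ctrl against the device, splits every output line on the first colon with `IFS=: read -r reg val`, and evals each pair into `nvme0[<reg>]=<val>`, so every identify field (sqes, cqes, oncs, subnqn, ps0, ...) becomes addressable by name. A minimal standalone sketch of that parsing loop, assuming nvme-cli is installed and /dev/nvme0 exists — the array name `ctrl` and the whitespace trimming here are illustrative, not the helper's exact code, which uses eval and a nameref instead:

  #!/usr/bin/env bash
  # Parse `nvme id-ctrl` output into an associative array keyed by
  # register name, mirroring the IFS=:/read loop traced above.
  declare -A ctrl
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}      # strip the padding around the key
      val=${val# }                  # drop the single leading space
      [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
  done < <(nvme id-ctrl /dev/nvme0)
  printf 'sqes=%s cqes=%s subnqn=%s\n' "${ctrl[sqes]}" "${ctrl[cqes]}" "${ctrl[subnqn]}"

Values that themselves contain colons (the power-state line, e.g. "mp:25.00W operational enlat:16 ...") survive intact because `read` hands everything after the first delimiter to the last variable, which is exactly why the trace shows them stored as single quoted strings.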
00:11:39.284 16:20:58 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:11:39.284 16:20:58 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:09.0 00:11:39.284 16:20:58 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:11:39.284 16:20:58 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:39.284 16:20:58 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:11:39.284 16:20:58 -- nvme/functions.sh@49 -- # pci=0000:00:08.0 00:11:39.284 16:20:58 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:08.0 00:11:39.284 16:20:58 -- scripts/common.sh@15 -- # local i 00:11:39.284 16:20:58 -- scripts/common.sh@18 -- # [[ =~ 0000:00:08.0 ]] 00:11:39.284 16:20:58 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:39.284 16:20:58 -- scripts/common.sh@24 -- # return 0 00:11:39.284 16:20:58 -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:11:39.284 16:20:58 -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:11:39.284 16:20:58 -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@18 -- # shift 00:11:39.284 16:20:58 -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.284 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.284 16:20:58 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:11:39.284 16:20:58 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12342 "' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[sn]='12342 ' 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # 
IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:11:39.285 
16:20:58 -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.285 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.285 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:11:39.285 16:20:58 -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 
00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 
00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # 
eval 'nvme1[megcap]="0"' 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.286 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.286 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:39.286 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12342"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12342 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:11:39.287 16:20:58 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.287 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.287 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:11:39.287 16:20:58 -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:11:39.288 16:20:58 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:39.288 16:20:58 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:11:39.288 16:20:58 -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:11:39.288 16:20:58 -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@18 -- # shift 00:11:39.288 16:20:58 -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ 
-n '' ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x100000"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x100000 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x100000"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x100000 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x100000"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x100000 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x4"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x4 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 
00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.288 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:11:39.288 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.288 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:11:39.289 16:20:58 -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:11:39.289 16:20:58 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:39.289 16:20:58 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n2 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@56 -- # ns_dev=nvme1n2 00:11:39.289 16:20:58 -- nvme/functions.sh@57 -- # nvme_get nvme1n2 id-ns /dev/nvme1n2 00:11:39.289 16:20:58 -- nvme/functions.sh@17 -- # local ref=nvme1n2 reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@18 -- # shift 00:11:39.289 16:20:58 -- nvme/functions.sh@20 -- # local -gA 'nvme1n2=()' 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n2 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsze]="0x100000"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[nsze]=0x100000 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[ncap]="0x100000"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[ncap]=0x100000 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nuse]="0x100000"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[nuse]=0x100000 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsfeat]="0x14"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[nsfeat]=0x14 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 
00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nlbaf]="7"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[nlbaf]=7 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[flbas]="0x4"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[flbas]=0x4 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mc]="0x3"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[mc]=0x3 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.289 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.289 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dpc]="0x1f"' 00:11:39.289 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[dpc]=0x1f 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dps]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[dps]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nmic]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[nmic]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[rescap]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[rescap]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[fpi]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[fpi]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[dlfeat]="1"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[dlfeat]=1 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawun]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[nawun]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nawupf]="0"' 00:11:39.290 16:20:58 -- 
nvme/functions.sh@23 -- # nvme1n2[nawupf]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nacwu]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[nacwu]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabsn]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[nabsn]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabo]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[nabo]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nabspf]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[nabspf]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[noiob]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[noiob]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmcap]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[nvmcap]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwg]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[npwg]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npwa]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[npwa]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npdg]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[npdg]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[npda]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[npda]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 
16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nows]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[nows]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mssrl]="128"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[mssrl]=128 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[mcl]="128"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[mcl]=128 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[msrc]="127"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[msrc]=127 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nulbaf]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[nulbaf]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[anagrpid]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[anagrpid]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nsattr]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[nsattr]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nvmsetid]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[nvmsetid]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[endgid]="0"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[endgid]=0 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[nguid]="00000000000000000000000000000000"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[nguid]=00000000000000000000000000000000 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[eui64]="0000000000000000"' 00:11:39.290 16:20:58 -- 
nvme/functions.sh@23 -- # nvme1n2[eui64]=0000000000000000 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.290 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:39.290 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:39.290 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n2 00:11:39.291 16:20:58 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:39.291 16:20:58 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n3 ]] 
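The trace above is the nvme_get helper from nvme/functions.sh filling a global associative array (here nvme1n2) from 'nvme id-ns' output, one "reg : val" line at a time. A minimal sketch of that pattern, reconstructed from the trace rather than copied verbatim from functions.sh:

nvme=/usr/local/src/nvme-cli/nvme            # binary shown at functions.sh@16 above

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                      # e.g. declares nvme1n2=() at global scope
    while IFS=: read -r reg val; do          # split each "reg : val" line on the first colon
        reg=${reg//[[:space:]]/}             # drop padding around the register name
        val=${val#"${val%%[![:space:]]*}"}   # trim leading blanks, keep the rest verbatim
        [[ -n $val ]] && eval "${ref}[${reg}]=\"${val}\""
    done < <("$nvme" "$@")                   # e.g. "$nvme" id-ns /dev/nvme1n2
}

Invoked as 'nvme_get nvme1n2 id-ns /dev/nvme1n2', after which ${nvme1n2[nlbaf]} holds the 7 captured above; values containing colons, such as the lbafN strings, survive intact because read -r leaves the remainder of the line in val.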
00:11:39.291 16:20:58 -- nvme/functions.sh@56 -- # ns_dev=nvme1n3 00:11:39.291 16:20:58 -- nvme/functions.sh@57 -- # nvme_get nvme1n3 id-ns /dev/nvme1n3 00:11:39.291 16:20:58 -- nvme/functions.sh@17 -- # local ref=nvme1n3 reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@18 -- # shift 00:11:39.291 16:20:58 -- nvme/functions.sh@20 -- # local -gA 'nvme1n3=()' 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n3 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsze]="0x100000"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[nsze]=0x100000 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[ncap]="0x100000"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[ncap]=0x100000 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nuse]="0x100000"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[nuse]=0x100000 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsfeat]="0x14"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[nsfeat]=0x14 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nlbaf]="7"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[nlbaf]=7 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[flbas]="0x4"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[flbas]=0x4 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mc]="0x3"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[mc]=0x3 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dpc]="0x1f"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[dpc]=0x1f 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 
0 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dps]="0"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[dps]=0 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nmic]="0"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[nmic]=0 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[rescap]="0"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[rescap]=0 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[fpi]="0"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[fpi]=0 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[dlfeat]="1"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[dlfeat]=1 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawun]="0"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[nawun]=0 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nawupf]="0"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[nawupf]=0 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nacwu]="0"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[nacwu]=0 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabsn]="0"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[nabsn]=0 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabo]="0"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[nabo]=0 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nabspf]="0"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[nabspf]=0 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[noiob]="0"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[noiob]=0 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmcap]="0"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[nvmcap]=0 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwg]="0"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[npwg]=0 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npwa]="0"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[npwa]=0 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.291 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npdg]="0"' 00:11:39.291 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[npdg]=0 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.291 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[npda]="0"' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[npda]=0 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nows]="0"' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[nows]=0 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mssrl]="128"' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[mssrl]=128 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[mcl]="128"' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[mcl]=128 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[msrc]="127"' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[msrc]=127 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nulbaf]="0"' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # 
nvme1n3[nulbaf]=0 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[anagrpid]="0"' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[anagrpid]=0 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nsattr]="0"' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[nsattr]=0 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nvmsetid]="0"' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[nvmsetid]=0 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[endgid]="0"' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[endgid]=0 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[nguid]="00000000000000000000000000000000"' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[nguid]=00000000000000000000000000000000 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[eui64]="0000000000000000"' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[eui64]=0000000000000000 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:39.292 16:20:58 -- 
nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme1n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme1n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n3 00:11:39.292 16:20:58 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:11:39.292 16:20:58 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:11:39.292 16:20:58 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:08.0 00:11:39.292 16:20:58 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:11:39.292 16:20:58 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:39.292 16:20:58 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@49 -- # pci=0000:00:06.0 00:11:39.292 16:20:58 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:06.0 00:11:39.292 16:20:58 -- scripts/common.sh@15 -- # local i 00:11:39.292 16:20:58 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:11:39.292 16:20:58 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:39.292 16:20:58 -- scripts/common.sh@24 -- # return 0 00:11:39.292 16:20:58 -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:11:39.292 16:20:58 -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:11:39.292 16:20:58 -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@18 -- # shift 00:11:39.292 16:20:58 -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:11:39.292 16:20:58 -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.292 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.292 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12340 "' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[sn]='12340 ' 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 
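The registers landing in nvme2[...] above identify the QEMU emulated controller (vid 0x1b36, sn "12340", mn "QEMU NVMe Ctrl", ver 0x10400). As a hypothetical example of consuming one of those captured fields afterwards (nvme_version is illustrative, not part of functions.sh, and assumes the spec's major<<16 | minor<<8 | tertiary packing of the version register):

nvme_version() {
    local -n ctrl=$1                 # nameref, the same mechanism functions.sh@53 uses above
    local ver=$((ctrl[ver]))         # "0x10400" evaluates as hex in an arithmetic context
    printf '%d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
}

nvme_version nvme2                   # -> 1.4.0 for the 0x10400 captured above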
00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 
16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:11:39.293 16:20:58 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:11:39.293 16:20:58 -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.293 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.293 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:11:39.294 16:20:58 -- 
nvme/functions.sh@23 -- # nvme2[dsto]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- 
nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 
00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.294 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.294 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:11:39.294 16:20:58 -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 
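nvme2[oncs]=0x15d captured above is the Optional NVM Command Support bitmask. A hypothetical bit test (the helper name is illustrative, not from functions.sh; the bit position is taken from the NVMe spec, where ONCS bit 3 indicates Write Zeroes):

supports_write_zeroes() {
    local -n ctrl=$1
    (( ctrl[oncs] & (1 << 3) ))      # ONCS bit 3 = Write Zeroes
}

supports_write_zeroes nvme2 && echo "nvme2: Write Zeroes supported"   # true for 0x15d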
00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12340"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12340 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:39.295 16:20:58 
-- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:11:39.295 16:20:58 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:39.295 16:20:58 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:11:39.295 16:20:58 -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:11:39.295 16:20:58 -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@18 -- # shift 00:11:39.295 16:20:58 -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x17a17a"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x17a17a 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x17a17a"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x17a17a 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x17a17a"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x17a17a 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 
16:20:58 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x7"' 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x7 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.295 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.295 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.295 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:11:39.296 16:20:58 -- 
nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:11:39.296 
16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:11:39.296 16:20:58 
-- nvme/functions.sh@21 -- # IFS=: 00:11:39.296 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.296 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:39.296 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:11:39.297 16:20:58 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:11:39.297 16:20:58 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:11:39.297 16:20:58 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:06.0 00:11:39.297 16:20:58 
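Each lbafN value captured above is an LBA format descriptor: ms = metadata bytes per block, lbads = log2 of the data block size, rp = relative performance, with "(in use)" marking the active format selected by the low nibble of flbas (for nvme2n1, flbas=0x7 picks lbaf7: 64 metadata bytes on 4096-byte blocks). A hedged decode of the in-use block size, reusing the values from the trace:

    # Sketch: block size from flbas + the matching lbaf descriptor.
    flbas=0x7
    fmt=$(( flbas & 0xf ))                      # low nibble = format index (7)
    lbaf='ms:64 lbads:12 rp:0 (in use)'         # nvme2n1[lbaf7] above
    lbads=${lbaf#*lbads:}; lbads=${lbads%% *}   # -> 12
    echo "lbaf$fmt data block: $(( 1 << lbads )) bytes"   # 4096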
-- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:11:39.297 16:20:58 -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:11:39.297 16:20:58 -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:11:39.297 16:20:58 -- nvme/functions.sh@49 -- # pci=0000:00:07.0 00:11:39.297 16:20:58 -- nvme/functions.sh@50 -- # pci_can_use 0000:00:07.0 00:11:39.297 16:20:58 -- scripts/common.sh@15 -- # local i 00:11:39.297 16:20:58 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:11:39.297 16:20:58 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:11:39.297 16:20:58 -- scripts/common.sh@24 -- # return 0 00:11:39.297 16:20:58 -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:11:39.297 16:20:58 -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:11:39.297 16:20:58 -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@18 -- # shift 00:11:39.297 16:20:58 -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:11:39.297 16:20:58 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12341 "' 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # nvme3[sn]='12341 ' 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:11:39.297 16:20:58 -- nvme/functions.sh@23 
-- # eval 'nvme3[ieee]="525400"' 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0"' 00:11:39.297 16:20:58 -- nvme/functions.sh@23 -- # nvme3[cmic]=0 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.297 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.297 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x8000"' 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x8000 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 
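Two of the nvme3 id-ctrl fields just parsed are packed integers: ver=0x10400 encodes the NVMe spec version (major in bits 16 and up, minor in bits 8-15, tertiary in bits 0-7, i.e. NVMe 1.4.0), and mdts=7 caps transfers at 2^7 pages of the controller's minimum page size. A quick decode; the 4 KiB page size is an assumption, since MPSMIN is not shown in this trace:

    # Sketch: unpack ver and mdts from the values above.
    ver=0x10400
    printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
    mdts=7
    echo "max transfer: $(( (1 << mdts) * 4096 )) bytes (assuming 4 KiB MPSMIN)"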
16:20:58 -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:58 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:11:39.298 16:20:58 -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:58 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # 
nvme3[frmw]=0x3 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # eval 
'nvme3[tnvmcap]="0"' 00:11:39.298 16:20:59 -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.298 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.298 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[endgidmax]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- 
nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.299 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.299 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:11:39.299 16:20:59 -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:11:39.299 
16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:12341"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:12341 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.300 16:20:59 -- 
nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n - ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:11:39.300 16:20:59 -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:11:39.300 16:20:59 -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme3/nvme3n1 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@56 -- # ns_dev=nvme3n1 00:11:39.300 16:20:59 -- nvme/functions.sh@57 -- # nvme_get nvme3n1 id-ns /dev/nvme3n1 00:11:39.300 16:20:59 -- nvme/functions.sh@17 -- # local ref=nvme3n1 reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@18 -- # shift 00:11:39.300 16:20:59 -- nvme/functions.sh@20 -- # local -gA 'nvme3n1=()' 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme3n1 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ 
-n 0x140000 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsze]="0x140000"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[nsze]=0x140000 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[ncap]="0x140000"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[ncap]=0x140000 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nuse]="0x140000"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[nuse]=0x140000 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsfeat]="0x14"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[nsfeat]=0x14 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nlbaf]="7"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[nlbaf]=7 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[flbas]="0x4"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[flbas]=0x4 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mc]="0x3"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[mc]=0x3 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dpc]="0x1f"' 00:11:39.300 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[dpc]=0x1f 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.300 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.300 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dps]="0"' 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[dps]=0 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.301 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nmic]="0"' 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[nmic]=0 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.301 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[rescap]="0"' 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[rescap]=0 00:11:39.301 
16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.301 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[fpi]="0"' 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[fpi]=0 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.301 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[dlfeat]="1"' 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[dlfeat]=1 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.301 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawun]="0"' 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[nawun]=0 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.301 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nawupf]="0"' 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[nawupf]=0 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.301 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nacwu]="0"' 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[nacwu]=0 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.301 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabsn]="0"' 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[nabsn]=0 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.301 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabo]="0"' 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[nabo]=0 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.301 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nabspf]="0"' 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[nabspf]=0 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.301 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[noiob]="0"' 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[noiob]=0 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.301 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmcap]="0"' 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[nvmcap]=0 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.301 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[npwg]="0"' 00:11:39.301 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[npwg]=0 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.301 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.301 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npwa]="0"' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[npwa]=0 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npdg]="0"' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[npdg]=0 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[npda]="0"' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[npda]=0 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nows]="0"' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[nows]=0 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mssrl]="128"' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[mssrl]=128 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[mcl]="128"' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[mcl]=128 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[msrc]="127"' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[msrc]=127 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nulbaf]="0"' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[nulbaf]=0 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[anagrpid]="0"' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[anagrpid]=0 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nsattr]="0"' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[nsattr]=0 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 
16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nvmsetid]="0"' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[nvmsetid]=0 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[endgid]="0"' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[endgid]=0 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[nguid]="00000000000000000000000000000000"' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[nguid]=00000000000000000000000000000000 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[eui64]="0000000000000000"' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[eui64]=0000000000000000 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 
'nvme3n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # eval 'nvme3n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:11:39.564 16:20:59 -- nvme/functions.sh@23 -- # nvme3n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # IFS=: 00:11:39.564 16:20:59 -- nvme/functions.sh@21 -- # read -r reg val 00:11:39.564 16:20:59 -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme3n1 00:11:39.564 16:20:59 -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:11:39.564 16:20:59 -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:11:39.564 16:20:59 -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:07.0 00:11:39.565 16:20:59 -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:11:39.565 16:20:59 -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:11:39.565 16:20:59 -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:11:39.565 16:20:59 -- nvme/functions.sh@202 -- # local _ctrls feature=fdp 00:11:39.565 16:20:59 -- nvme/functions.sh@204 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:11:39.565 16:20:59 -- nvme/functions.sh@204 -- # get_ctrls_with_feature fdp 00:11:39.565 16:20:59 -- nvme/functions.sh@190 -- # (( 4 == 0 )) 00:11:39.565 16:20:59 -- nvme/functions.sh@192 -- # local ctrl feature=fdp 00:11:39.565 16:20:59 -- nvme/functions.sh@194 -- # type -t ctrl_has_fdp 00:11:39.565 16:20:59 -- nvme/functions.sh@194 -- # [[ function == function ]] 00:11:39.565 16:20:59 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:39.565 16:20:59 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme1 00:11:39.565 16:20:59 -- nvme/functions.sh@174 -- # local ctrl=nvme1 ctratt 00:11:39.565 16:20:59 -- nvme/functions.sh@176 -- # get_ctratt nvme1 00:11:39.565 16:20:59 -- nvme/functions.sh@164 -- # local ctrl=nvme1 00:11:39.565 16:20:59 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme1 ctratt 00:11:39.565 16:20:59 -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:11:39.565 16:20:59 -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:11:39.565 16:20:59 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:11:39.565 16:20:59 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:39.565 16:20:59 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:39.565 16:20:59 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:39.565 16:20:59 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:39.565 16:20:59 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:39.565 16:20:59 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme0 00:11:39.565 16:20:59 -- nvme/functions.sh@174 -- # local ctrl=nvme0 ctratt 00:11:39.565 16:20:59 -- nvme/functions.sh@176 -- # get_ctratt nvme0 00:11:39.565 16:20:59 -- nvme/functions.sh@164 -- # local ctrl=nvme0 00:11:39.565 16:20:59 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme0 ctratt 00:11:39.565 16:20:59 -- 
nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:11:39.565 16:20:59 -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:11:39.565 16:20:59 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:11:39.565 16:20:59 -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:11:39.565 16:20:59 -- nvme/functions.sh@76 -- # echo 0x88010 00:11:39.565 16:20:59 -- nvme/functions.sh@176 -- # ctratt=0x88010 00:11:39.565 16:20:59 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:39.565 16:20:59 -- nvme/functions.sh@197 -- # echo nvme0 00:11:39.565 16:20:59 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:39.565 16:20:59 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme3 00:11:39.565 16:20:59 -- nvme/functions.sh@174 -- # local ctrl=nvme3 ctratt 00:11:39.565 16:20:59 -- nvme/functions.sh@176 -- # get_ctratt nvme3 00:11:39.565 16:20:59 -- nvme/functions.sh@164 -- # local ctrl=nvme3 00:11:39.565 16:20:59 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme3 ctratt 00:11:39.565 16:20:59 -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:11:39.565 16:20:59 -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:11:39.565 16:20:59 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:11:39.565 16:20:59 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:39.565 16:20:59 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:39.565 16:20:59 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:39.565 16:20:59 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:39.565 16:20:59 -- nvme/functions.sh@196 -- # for ctrl in "${!ctrls[@]}" 00:11:39.565 16:20:59 -- nvme/functions.sh@197 -- # ctrl_has_fdp nvme2 00:11:39.565 16:20:59 -- nvme/functions.sh@174 -- # local ctrl=nvme2 ctratt 00:11:39.565 16:20:59 -- nvme/functions.sh@176 -- # get_ctratt nvme2 00:11:39.565 16:20:59 -- nvme/functions.sh@164 -- # local ctrl=nvme2 00:11:39.565 16:20:59 -- nvme/functions.sh@165 -- # get_nvme_ctrl_feature nvme2 ctratt 00:11:39.565 16:20:59 -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:11:39.565 16:20:59 -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:11:39.565 16:20:59 -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:11:39.565 16:20:59 -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:11:39.565 16:20:59 -- nvme/functions.sh@76 -- # echo 0x8000 00:11:39.565 16:20:59 -- nvme/functions.sh@176 -- # ctratt=0x8000 00:11:39.565 16:20:59 -- nvme/functions.sh@178 -- # (( ctratt & 1 << 19 )) 00:11:39.565 16:20:59 -- nvme/functions.sh@204 -- # trap - ERR 00:11:39.565 16:20:59 -- nvme/functions.sh@204 -- # print_backtrace 00:11:39.565 16:20:59 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:39.565 16:20:59 -- common/autotest_common.sh@1142 -- # return 0 00:11:39.565 16:20:59 -- nvme/functions.sh@204 -- # trap - ERR 00:11:39.565 16:20:59 -- nvme/functions.sh@204 -- # print_backtrace 00:11:39.565 16:20:59 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:11:39.565 16:20:59 -- common/autotest_common.sh@1142 -- # return 0 00:11:39.565 16:20:59 -- nvme/functions.sh@205 -- # (( 1 > 0 )) 00:11:39.565 16:20:59 -- nvme/functions.sh@206 -- # echo nvme0 00:11:39.565 16:20:59 -- nvme/functions.sh@207 -- # return 0 00:11:39.565 16:20:59 -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme0 00:11:39.565 16:20:59 -- nvme/nvme_fdp.sh@13 -- # bdf=0000:00:09.0 00:11:39.565 16:20:59 -- nvme/nvme_fdp.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:40.509 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:40.509 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:11:40.509 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.509 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.509 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:11:40.770 16:21:00 -- nvme/nvme_fdp.sh@17 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:40.770 16:21:00 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:40.770 16:21:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:40.770 16:21:00 -- common/autotest_common.sh@10 -- # set +x 00:11:40.770 ************************************ 00:11:40.770 START TEST nvme_flexible_data_placement ************************************ 00:11:40.770 16:21:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:09.0' 00:11:41.032 Initializing NVMe Controllers 00:11:41.032 Attaching to 0000:00:09.0 00:11:41.032 Controller supports FDP Attached to 0000:00:09.0 00:11:41.032 Namespace ID: 1 Endurance Group ID: 1 00:11:41.032 Initialization complete. 00:11:41.032 00:11:41.032 ================================== 00:11:41.032 == FDP tests for Namespace: #01 == 00:11:41.032 ================================== 00:11:41.032 00:11:41.032 Get Feature: FDP: 00:11:41.032 ================= 00:11:41.032 Enabled: Yes 00:11:41.032 FDP configuration Index: 0 00:11:41.032 00:11:41.032 FDP configurations log page 00:11:41.032 =========================== 00:11:41.032 Number of FDP configurations: 1 00:11:41.032 Version: 0 00:11:41.032 Size: 112 00:11:41.032 FDP Configuration Descriptor: 0 00:11:41.032 Descriptor Size: 96 00:11:41.032 Reclaim Group Identifier format: 2 00:11:41.032 FDP Volatile Write Cache: Not Present 00:11:41.032 FDP Configuration: Valid 00:11:41.032 Vendor Specific Size: 0 00:11:41.032 Number of Reclaim Groups: 2 00:11:41.032 Number of Reclaim Unit Handles: 8 00:11:41.032 Max Placement Identifiers: 128 00:11:41.032 Number of Namespaces Supported: 256 00:11:41.032 Reclaim Unit Nominal Size: 6000000 bytes 00:11:41.032 Estimated Reclaim Unit Time Limit: Not Reported 00:11:41.032 RUH Desc #000: RUH Type: Initially Isolated 00:11:41.032 RUH Desc #001: RUH Type: Initially Isolated 00:11:41.032 RUH Desc #002: RUH Type: Initially Isolated 00:11:41.032 RUH Desc #003: RUH Type: Initially Isolated 00:11:41.032 RUH Desc #004: RUH Type: Initially Isolated 00:11:41.032 RUH Desc #005: RUH Type: Initially Isolated 00:11:41.032 RUH Desc #006: RUH Type: Initially Isolated 00:11:41.032 RUH Desc #007: RUH Type: Initially Isolated 00:11:41.032 00:11:41.032 FDP reclaim unit handle usage log page 00:11:41.032 ====================================== 00:11:41.032 Number of Reclaim Unit Handles: 8 00:11:41.032 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:11:41.032 RUH Usage Desc #001: RUH Attributes: Unused 00:11:41.032 RUH Usage Desc #002: RUH Attributes: Unused 00:11:41.032 RUH Usage Desc #003: RUH Attributes: Unused 00:11:41.032 RUH Usage Desc #004: RUH Attributes: Unused 00:11:41.032 RUH Usage Desc #005: RUH Attributes: Unused 00:11:41.032 RUH Usage Desc #006: RUH Attributes: Unused 00:11:41.032 RUH Usage Desc #007: RUH Attributes: Unused 00:11:41.032 00:11:41.032 FDP statistics log page 00:11:41.032 ======================= 00:11:41.032 Host bytes with metadata written: 935477248 00:11:41.032 Media bytes with metadata written: 935849984 00:11:41.032 Media bytes erased: 0 00:11:41.032
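The controller behind this report was selected a moment earlier by get_ctrls_with_feature, which reads each controller's cached CTRATT value and tests bit 19, the Flexible Data Placement capability bit: nvme0 reports 0x88010 (bit set), while the other controllers report 0x8000 (bit clear). A minimal standalone sketch of the same check, assuming nvme-cli is installed (an illustration, not the suite's own helper):

for dev in /dev/nvme[0-9]; do
    # 'nvme id-ctrl' prints CTRATT as a hex field, e.g. "ctratt : 0x88010"
    ctratt=$(nvme id-ctrl "$dev" | awk -F: '/^ctratt/ {gsub(/[[:space:]]/, "", $2); print $2}')
    # Bit 19 of CTRATT advertises FDP support -- the same test functions.sh runs above
    if (( ctratt & 1 << 19 )); then
        echo "$dev supports FDP (ctratt=$ctratt)"
    fi
done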
00:11:41.032 FDP Reclaim unit handle status 00:11:41.032 ============================== 00:11:41.032 Number of RUHS descriptors: 2 00:11:41.032 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000043dc 00:11:41.032 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:11:41.032 00:11:41.032 FDP write on placement id: 0 success 00:11:41.032 00:11:41.032 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:11:41.032 00:11:41.032 IO mgmt send: RUH update for Placement ID: #0 Success 00:11:41.032 00:11:41.032 Get Feature: FDP Events for Placement handle: #0 00:11:41.032 ======================== 00:11:41.032 Number of FDP Events: 6 00:11:41.032 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:11:41.032 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:11:41.032 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:11:41.032 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:11:41.032 FDP Event: #4 Type: Media Reallocated Enabled: No 00:11:41.032 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:11:41.032 00:11:41.032 FDP events log page 00:11:41.032 =================== 00:11:41.032 Number of FDP events: 1 00:11:41.032 FDP Event #0: 00:11:41.032 Event Type: RU Not Written to Capacity 00:11:41.032 Placement Identifier: Valid 00:11:41.032 NSID: Valid 00:11:41.032 Location: Valid 00:11:41.032 Placement Identifier: 0 00:11:41.032 Event Timestamp: b 00:11:41.032 Namespace Identifier: 1 00:11:41.032 Reclaim Group Identifier: 0 00:11:41.032 Reclaim Unit Handle Identifier: 0 00:11:41.032 00:11:41.032 FDP test passed 00:11:41.032 00:11:41.032 real 0m0.246s 00:11:41.032 user 0m0.065s 00:11:41.032 sys 0m0.079s 00:11:41.032 16:21:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:41.032 ************************************ 00:11:41.032 END TEST nvme_flexible_data_placement ************************************ 00:11:41.032 16:21:00 -- common/autotest_common.sh@10 -- # set +x 00:11:41.032 00:11:41.032 real 0m8.070s 00:11:41.032 user 0m1.161s 00:11:41.032 sys 0m1.634s 00:11:41.032 16:21:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:41.032 ************************************ 00:11:41.032 END TEST nvme_fdp ************************************ 00:11:41.032 16:21:00 -- common/autotest_common.sh@10 -- # set +x 00:11:41.032 16:21:00 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:11:41.032 16:21:00 -- spdk/autotest.sh@233 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:41.032 16:21:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:41.032 16:21:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:41.032 16:21:00 -- common/autotest_common.sh@10 -- # set +x 00:11:41.032 ************************************ 00:11:41.032 START TEST nvme_rpc ************************************ 00:11:41.032 16:21:00 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:11:41.295 * Looking for test storage...
00:11:41.295 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:41.295 16:21:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:41.295 16:21:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:41.295 16:21:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:41.295 16:21:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:41.295 16:21:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:41.295 16:21:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:41.295 16:21:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:41.295 16:21:00 -- scripts/common.sh@335 -- # IFS=.-: 00:11:41.295 16:21:00 -- scripts/common.sh@335 -- # read -ra ver1 00:11:41.295 16:21:00 -- scripts/common.sh@336 -- # IFS=.-: 00:11:41.295 16:21:00 -- scripts/common.sh@336 -- # read -ra ver2 00:11:41.295 16:21:00 -- scripts/common.sh@337 -- # local 'op=<' 00:11:41.295 16:21:00 -- scripts/common.sh@339 -- # ver1_l=2 00:11:41.295 16:21:00 -- scripts/common.sh@340 -- # ver2_l=1 00:11:41.295 16:21:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:41.295 16:21:00 -- scripts/common.sh@343 -- # case "$op" in 00:11:41.295 16:21:00 -- scripts/common.sh@344 -- # : 1 00:11:41.295 16:21:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:41.295 16:21:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:41.295 16:21:00 -- scripts/common.sh@364 -- # decimal 1 00:11:41.295 16:21:00 -- scripts/common.sh@352 -- # local d=1 00:11:41.295 16:21:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:41.295 16:21:00 -- scripts/common.sh@354 -- # echo 1 00:11:41.295 16:21:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:41.295 16:21:00 -- scripts/common.sh@365 -- # decimal 2 00:11:41.295 16:21:00 -- scripts/common.sh@352 -- # local d=2 00:11:41.295 16:21:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:41.295 16:21:00 -- scripts/common.sh@354 -- # echo 2 00:11:41.295 16:21:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:41.295 16:21:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:41.295 16:21:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:41.295 16:21:00 -- scripts/common.sh@367 -- # return 0 00:11:41.295 16:21:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:41.295 16:21:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:41.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.295 --rc genhtml_branch_coverage=1 00:11:41.295 --rc genhtml_function_coverage=1 00:11:41.295 --rc genhtml_legend=1 00:11:41.295 --rc geninfo_all_blocks=1 00:11:41.295 --rc geninfo_unexecuted_blocks=1 00:11:41.295 00:11:41.295 ' 00:11:41.295 16:21:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:41.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.295 --rc genhtml_branch_coverage=1 00:11:41.295 --rc genhtml_function_coverage=1 00:11:41.295 --rc genhtml_legend=1 00:11:41.295 --rc geninfo_all_blocks=1 00:11:41.295 --rc geninfo_unexecuted_blocks=1 00:11:41.295 00:11:41.295 ' 00:11:41.295 16:21:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:41.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.295 --rc genhtml_branch_coverage=1 00:11:41.295 --rc genhtml_function_coverage=1 00:11:41.295 --rc genhtml_legend=1 00:11:41.295 --rc geninfo_all_blocks=1 00:11:41.295 --rc geninfo_unexecuted_blocks=1 00:11:41.295 00:11:41.295 ' 00:11:41.295 16:21:00 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:41.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:41.295 --rc genhtml_branch_coverage=1 00:11:41.295 --rc genhtml_function_coverage=1 00:11:41.295 --rc genhtml_legend=1 00:11:41.295 --rc geninfo_all_blocks=1 00:11:41.295 --rc geninfo_unexecuted_blocks=1 00:11:41.295 00:11:41.295 ' 00:11:41.295 16:21:00 -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:41.295 16:21:00 -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:11:41.295 16:21:00 -- common/autotest_common.sh@1519 -- # bdfs=() 00:11:41.295 16:21:00 -- common/autotest_common.sh@1519 -- # local bdfs 00:11:41.295 16:21:00 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:11:41.295 16:21:00 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:11:41.295 16:21:00 -- common/autotest_common.sh@1508 -- # bdfs=() 00:11:41.295 16:21:00 -- common/autotest_common.sh@1508 -- # local bdfs 00:11:41.295 16:21:00 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:11:41.295 16:21:00 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:11:41.295 16:21:00 -- common/autotest_common.sh@1509 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:11:41.295 16:21:00 -- common/autotest_common.sh@1510 -- # (( 4 == 0 )) 00:11:41.295 16:21:00 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 0000:00:08.0 0000:00:09.0 00:11:41.295 16:21:00 -- common/autotest_common.sh@1522 -- # echo 0000:00:06.0 00:11:41.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.295 16:21:00 -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:06.0 00:11:41.295 16:21:00 -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66690 00:11:41.295 16:21:00 -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:11:41.295 16:21:00 -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66690 00:11:41.295 16:21:00 -- common/autotest_common.sh@829 -- # '[' -z 66690 ']' 00:11:41.295 16:21:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.295 16:21:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:41.295 16:21:00 -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:41.295 16:21:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.295 16:21:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:41.295 16:21:00 -- common/autotest_common.sh@10 -- # set +x 00:11:41.295 [2024-11-09 16:21:01.055881] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:41.295 [2024-11-09 16:21:01.056041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66690 ] 00:11:41.557 [2024-11-09 16:21:01.211438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:41.819 [2024-11-09 16:21:01.490046] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:41.819 [2024-11-09 16:21:01.490588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.819 [2024-11-09 16:21:01.490615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:43.206 16:21:02 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:43.206 16:21:02 -- common/autotest_common.sh@862 -- # return 0 00:11:43.206 16:21:02 -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:11:43.206 Nvme0n1 00:11:43.206 16:21:02 -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:11:43.206 16:21:02 -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:11:43.468 request: 00:11:43.468 { 00:11:43.468 "filename": "non_existing_file", 00:11:43.468 "bdev_name": "Nvme0n1", 00:11:43.468 "method": "bdev_nvme_apply_firmware", 00:11:43.468 "req_id": 1 00:11:43.468 } 00:11:43.468 Got JSON-RPC error response 00:11:43.468 response: 00:11:43.468 { 00:11:43.468 "code": -32603, 00:11:43.468 "message": "open file failed." 00:11:43.468 } 00:11:43.468 16:21:03 -- nvme/nvme_rpc.sh@32 -- # rv=1 00:11:43.468 16:21:03 -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:11:43.468 16:21:03 -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:11:43.468 16:21:03 -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:43.468 16:21:03 -- nvme/nvme_rpc.sh@40 -- # killprocess 66690 00:11:43.468 16:21:03 -- common/autotest_common.sh@936 -- # '[' -z 66690 ']' 00:11:43.468 16:21:03 -- common/autotest_common.sh@940 -- # kill -0 66690 00:11:43.730 16:21:03 -- common/autotest_common.sh@941 -- # uname 00:11:43.730 16:21:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:43.730 16:21:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66690 00:11:43.730 killing process with pid 66690 00:11:43.730 16:21:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:43.730 16:21:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:43.730 16:21:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66690' 00:11:43.730 16:21:03 -- common/autotest_common.sh@955 -- # kill 66690 00:11:43.730 16:21:03 -- common/autotest_common.sh@960 -- # wait 66690 00:11:45.116 ************************************ 00:11:45.116 END TEST nvme_rpc 00:11:45.116 ************************************ 00:11:45.116 00:11:45.116 real 0m3.735s 00:11:45.116 user 0m6.794s 00:11:45.116 sys 0m0.744s 00:11:45.116 16:21:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:45.116 16:21:04 -- common/autotest_common.sh@10 -- # set +x 00:11:45.116 16:21:04 -- spdk/autotest.sh@234 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:45.116 16:21:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:45.116 16:21:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 
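The heart of the nvme_rpc test that just completed is a deliberate failure path: bdev_nvme_apply_firmware is invoked with a file that does not exist, and the test only passes because the target answers with the -32603 "open file failed." error shown above. Reduced to its essentials, using the same rpc.py calls as this run (a sketch, not the full script):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
# rpc.py exits non-zero when the target returns a JSON-RPC error
if ! $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
    echo 'apply_firmware failed as expected (-32603, open file failed)'
fi
$rpc bdev_nvme_detach_controller Nvme0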
00:11:45.116 16:21:04 -- common/autotest_common.sh@10 -- # set +x 00:11:45.116 ************************************ 00:11:45.116 START TEST nvme_rpc_timeouts 00:11:45.116 ************************************ 00:11:45.116 16:21:04 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:11:45.116 * Looking for test storage... 00:11:45.116 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:45.116 16:21:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:45.116 16:21:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:45.116 16:21:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:45.116 16:21:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:45.116 16:21:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:45.116 16:21:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:45.116 16:21:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:45.116 16:21:04 -- scripts/common.sh@335 -- # IFS=.-: 00:11:45.116 16:21:04 -- scripts/common.sh@335 -- # read -ra ver1 00:11:45.116 16:21:04 -- scripts/common.sh@336 -- # IFS=.-: 00:11:45.116 16:21:04 -- scripts/common.sh@336 -- # read -ra ver2 00:11:45.116 16:21:04 -- scripts/common.sh@337 -- # local 'op=<' 00:11:45.116 16:21:04 -- scripts/common.sh@339 -- # ver1_l=2 00:11:45.116 16:21:04 -- scripts/common.sh@340 -- # ver2_l=1 00:11:45.116 16:21:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:45.116 16:21:04 -- scripts/common.sh@343 -- # case "$op" in 00:11:45.116 16:21:04 -- scripts/common.sh@344 -- # : 1 00:11:45.116 16:21:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:45.116 16:21:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:45.116 16:21:04 -- scripts/common.sh@364 -- # decimal 1 00:11:45.116 16:21:04 -- scripts/common.sh@352 -- # local d=1 00:11:45.116 16:21:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:45.116 16:21:04 -- scripts/common.sh@354 -- # echo 1 00:11:45.116 16:21:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:45.116 16:21:04 -- scripts/common.sh@365 -- # decimal 2 00:11:45.116 16:21:04 -- scripts/common.sh@352 -- # local d=2 00:11:45.116 16:21:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:45.116 16:21:04 -- scripts/common.sh@354 -- # echo 2 00:11:45.116 16:21:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:45.116 16:21:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:45.116 16:21:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:45.116 16:21:04 -- scripts/common.sh@367 -- # return 0 00:11:45.116 16:21:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:45.116 16:21:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:45.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.116 --rc genhtml_branch_coverage=1 00:11:45.116 --rc genhtml_function_coverage=1 00:11:45.116 --rc genhtml_legend=1 00:11:45.116 --rc geninfo_all_blocks=1 00:11:45.116 --rc geninfo_unexecuted_blocks=1 00:11:45.116 00:11:45.116 ' 00:11:45.116 16:21:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:45.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.116 --rc genhtml_branch_coverage=1 00:11:45.116 --rc genhtml_function_coverage=1 00:11:45.116 --rc genhtml_legend=1 00:11:45.116 --rc geninfo_all_blocks=1 00:11:45.116 --rc geninfo_unexecuted_blocks=1 00:11:45.116 00:11:45.116 ' 00:11:45.116 16:21:04 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:45.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.116 --rc genhtml_branch_coverage=1 00:11:45.116 --rc genhtml_function_coverage=1 00:11:45.116 --rc genhtml_legend=1 00:11:45.116 --rc geninfo_all_blocks=1 00:11:45.116 --rc geninfo_unexecuted_blocks=1 00:11:45.116 00:11:45.116 ' 00:11:45.116 16:21:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:45.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:45.116 --rc genhtml_branch_coverage=1 00:11:45.116 --rc genhtml_function_coverage=1 00:11:45.116 --rc genhtml_legend=1 00:11:45.116 --rc geninfo_all_blocks=1 00:11:45.116 --rc geninfo_unexecuted_blocks=1 00:11:45.116 00:11:45.116 ' 00:11:45.116 16:21:04 -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:45.116 16:21:04 -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66761 00:11:45.116 16:21:04 -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66761 00:11:45.116 16:21:04 -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66803 00:11:45.116 16:21:04 -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:11:45.116 16:21:04 -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66803 00:11:45.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.116 16:21:04 -- common/autotest_common.sh@829 -- # '[' -z 66803 ']' 00:11:45.116 16:21:04 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.116 16:21:04 -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:45.116 16:21:04 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.116 16:21:04 -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:45.116 16:21:04 -- common/autotest_common.sh@10 -- # set +x 00:11:45.116 16:21:04 -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:11:45.116 [2024-11-09 16:21:04.773197] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:45.116 [2024-11-09 16:21:04.773367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66803 ] 00:11:45.378 [2024-11-09 16:21:04.932177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:11:45.378 [2024-11-09 16:21:05.114656] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:45.378 [2024-11-09 16:21:05.115050] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:45.378 [2024-11-09 16:21:05.115328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.766 16:21:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:46.766 Checking default timeout settings: 00:11:46.766 16:21:06 -- common/autotest_common.sh@862 -- # return 0 00:11:46.766 16:21:06 -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:11:46.766 16:21:06 -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:47.028 Making settings changes with rpc: 00:11:47.028 16:21:06 -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:11:47.028 16:21:06 -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:11:47.028 Check default vs. modified settings: 00:11:47.028 16:21:06 -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:11:47.028 16:21:06 -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66761 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66761 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:11:47.602 Setting action_on_timeout is changed as expected. 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
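The action_on_timeout check above, and the timeout_us and timeout_admin_us checks that follow, all use the same extraction: grep the setting out of each saved-config dump, take the second field with awk, and strip punctuation with sed before comparing the before/after values. Condensed into one loop, with the temp-file names from this run (a sketch of the script's logic, not a replacement for it):

for setting in action_on_timeout timeout_us timeout_admin_us; do
    # Pull '"setting": value' out of each saved config and normalize it
    before=$(grep "$setting" /tmp/settings_default_66761 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_66761 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    if [ "$before" != "$after" ]; then
        echo "Setting $setting is changed as expected."
    fi
done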
00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66761 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66761 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:11:47.602 Setting timeout_us is changed as expected. 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66761 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66761 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:11:47.602 Setting timeout_admin_us is changed as expected. 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66761 /tmp/settings_modified_66761 00:11:47.602 16:21:07 -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66803 00:11:47.602 16:21:07 -- common/autotest_common.sh@936 -- # '[' -z 66803 ']' 00:11:47.602 16:21:07 -- common/autotest_common.sh@940 -- # kill -0 66803 00:11:47.602 16:21:07 -- common/autotest_common.sh@941 -- # uname 00:11:47.602 16:21:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:47.602 16:21:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66803 00:11:47.602 16:21:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:47.602 16:21:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:47.602 killing process with pid 66803 00:11:47.602 16:21:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66803' 00:11:47.602 16:21:07 -- common/autotest_common.sh@955 -- # kill 66803 00:11:47.602 16:21:07 -- common/autotest_common.sh@960 -- # wait 66803 00:11:48.983 RPC TIMEOUT SETTING TEST PASSED. 00:11:48.983 16:21:08 -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
00:11:48.983 00:11:48.983 real 0m3.862s 00:11:48.983 user 0m7.424s 00:11:48.983 sys 0m0.584s 00:11:48.983 ************************************ 00:11:48.984 END TEST nvme_rpc_timeouts 00:11:48.984 ************************************ 00:11:48.984 16:21:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:11:48.984 16:21:08 -- common/autotest_common.sh@10 -- # set +x 00:11:48.984 16:21:08 -- spdk/autotest.sh@238 -- # '[' 1 -eq 0 ']' 00:11:48.984 16:21:08 -- spdk/autotest.sh@242 -- # [[ 1 -eq 1 ]] 00:11:48.984 16:21:08 -- spdk/autotest.sh@243 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:48.984 16:21:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:48.984 16:21:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:48.984 16:21:08 -- common/autotest_common.sh@10 -- # set +x 00:11:48.984 ************************************ 00:11:48.984 START TEST nvme_xnvme 00:11:48.984 ************************************ 00:11:48.984 16:21:08 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:48.984 * Looking for test storage... 00:11:48.984 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:48.984 16:21:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:11:48.984 16:21:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:11:48.984 16:21:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:11:48.984 16:21:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:11:48.984 16:21:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:11:48.984 16:21:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:11:48.984 16:21:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:11:48.984 16:21:08 -- scripts/common.sh@335 -- # IFS=.-: 00:11:48.984 16:21:08 -- scripts/common.sh@335 -- # read -ra ver1 00:11:48.984 16:21:08 -- scripts/common.sh@336 -- # IFS=.-: 00:11:48.984 16:21:08 -- scripts/common.sh@336 -- # read -ra ver2 00:11:48.984 16:21:08 -- scripts/common.sh@337 -- # local 'op=<' 00:11:48.984 16:21:08 -- scripts/common.sh@339 -- # ver1_l=2 00:11:48.984 16:21:08 -- scripts/common.sh@340 -- # ver2_l=1 00:11:48.984 16:21:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:11:48.984 16:21:08 -- scripts/common.sh@343 -- # case "$op" in 00:11:48.984 16:21:08 -- scripts/common.sh@344 -- # : 1 00:11:48.984 16:21:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:11:48.984 16:21:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:48.984 16:21:08 -- scripts/common.sh@364 -- # decimal 1 00:11:48.984 16:21:08 -- scripts/common.sh@352 -- # local d=1 00:11:48.984 16:21:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:48.984 16:21:08 -- scripts/common.sh@354 -- # echo 1 00:11:48.984 16:21:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:11:48.984 16:21:08 -- scripts/common.sh@365 -- # decimal 2 00:11:48.984 16:21:08 -- scripts/common.sh@352 -- # local d=2 00:11:48.984 16:21:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:48.984 16:21:08 -- scripts/common.sh@354 -- # echo 2 00:11:48.984 16:21:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:11:48.984 16:21:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:11:48.984 16:21:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:11:48.984 16:21:08 -- scripts/common.sh@367 -- # return 0 00:11:48.984 16:21:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:48.984 16:21:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:11:48.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.984 --rc genhtml_branch_coverage=1 00:11:48.984 --rc genhtml_function_coverage=1 00:11:48.984 --rc genhtml_legend=1 00:11:48.984 --rc geninfo_all_blocks=1 00:11:48.984 --rc geninfo_unexecuted_blocks=1 00:11:48.984 00:11:48.984 ' 00:11:48.984 16:21:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:11:48.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.984 --rc genhtml_branch_coverage=1 00:11:48.984 --rc genhtml_function_coverage=1 00:11:48.984 --rc genhtml_legend=1 00:11:48.984 --rc geninfo_all_blocks=1 00:11:48.984 --rc geninfo_unexecuted_blocks=1 00:11:48.984 00:11:48.984 ' 00:11:48.984 16:21:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:11:48.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.984 --rc genhtml_branch_coverage=1 00:11:48.984 --rc genhtml_function_coverage=1 00:11:48.984 --rc genhtml_legend=1 00:11:48.984 --rc geninfo_all_blocks=1 00:11:48.984 --rc geninfo_unexecuted_blocks=1 00:11:48.984 00:11:48.984 ' 00:11:48.984 16:21:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:11:48.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:48.984 --rc genhtml_branch_coverage=1 00:11:48.984 --rc genhtml_function_coverage=1 00:11:48.984 --rc genhtml_legend=1 00:11:48.984 --rc geninfo_all_blocks=1 00:11:48.984 --rc geninfo_unexecuted_blocks=1 00:11:48.984 00:11:48.984 ' 00:11:48.984 16:21:08 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:48.984 16:21:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.984 16:21:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.984 16:21:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.984 16:21:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.984 16:21:08 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.984 16:21:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.984 16:21:08 -- paths/export.sh@5 -- # export PATH 00:11:48.984 16:21:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:11:48.984 16:21:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:11:48.984 16:21:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:48.984 16:21:08 -- common/autotest_common.sh@10 -- # set +x 00:11:48.984 ************************************ 00:11:48.984 START TEST xnvme_to_malloc_dd_copy 00:11:48.984 ************************************ 00:11:48.984 16:21:08 -- common/autotest_common.sh@1114 -- # malloc_to_xnvme_copy 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:11:48.984 16:21:08 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:11:48.984 16:21:08 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:11:48.984 16:21:08 -- dd/common.sh@191 -- # return 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@18 -- # local io 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@42 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:11:48.984 16:21:08 -- xnvme/xnvme.sh@42 -- # gen_conf 00:11:48.984 16:21:08 -- dd/common.sh@31 -- # xtrace_disable 00:11:48.984 16:21:08 -- common/autotest_common.sh@10 -- # set +x 00:11:48.984 { 00:11:48.984 "subsystems": [ 00:11:48.984 { 00:11:48.984 "subsystem": "bdev", 00:11:48.984 "config": [ 00:11:48.984 { 00:11:48.984 "params": { 00:11:48.984 "block_size": 512, 00:11:48.984 "num_blocks": 2097152, 00:11:48.984 "name": "malloc0" 00:11:48.984 }, 00:11:48.984 "method": "bdev_malloc_create" 00:11:48.984 }, 00:11:48.984 { 00:11:48.984 "params": { 00:11:48.984 "io_mechanism": "libaio", 00:11:48.984 "filename": "/dev/nullb0", 00:11:48.984 "name": "null0" 00:11:48.984 }, 00:11:48.984 "method": "bdev_xnvme_create" 00:11:48.984 }, 00:11:48.984 { 00:11:48.984 "method": "bdev_wait_for_examine" 00:11:48.984 } 00:11:48.984 ] 00:11:48.984 } 00:11:48.984 ] 00:11:48.984 } 00:11:48.984 [2024-11-09 16:21:08.699506] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:11:48.984 [2024-11-09 16:21:08.699615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66934 ] 00:11:49.245 [2024-11-09 16:21:08.847595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.504 [2024-11-09 16:21:09.106812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.426  [2024-11-09T16:21:12.582Z] Copying: 228/1024 [MB] (228 MBps) [2024-11-09T16:21:13.525Z] Copying: 507/1024 [MB] (278 MBps) [2024-11-09T16:21:14.097Z] Copying: 810/1024 [MB] (303 MBps) [2024-11-09T16:21:16.011Z] Copying: 1024/1024 [MB] (average 276 MBps) 00:11:56.241 00:11:56.503 16:21:16 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:11:56.503 16:21:16 -- xnvme/xnvme.sh@47 -- # gen_conf 00:11:56.503 16:21:16 -- dd/common.sh@31 -- # xtrace_disable 00:11:56.503 16:21:16 -- common/autotest_common.sh@10 -- # set +x 00:11:56.503 { 00:11:56.503 "subsystems": [ 00:11:56.503 { 00:11:56.503 "subsystem": "bdev", 00:11:56.503 "config": [ 00:11:56.503 { 00:11:56.503 "params": { 00:11:56.503 "block_size": 512, 00:11:56.503 "num_blocks": 2097152, 00:11:56.503 "name": "malloc0" 00:11:56.503 }, 00:11:56.503 "method": "bdev_malloc_create" 00:11:56.503 }, 00:11:56.503 { 00:11:56.503 "params": { 00:11:56.503 "io_mechanism": "libaio", 00:11:56.503 "filename": "/dev/nullb0", 00:11:56.503 "name": "null0" 00:11:56.503 }, 00:11:56.503 "method": "bdev_xnvme_create" 00:11:56.503 }, 00:11:56.503 { 00:11:56.503 "method": "bdev_wait_for_examine" 00:11:56.503 } 00:11:56.503 ] 00:11:56.503 } 00:11:56.503 ] 00:11:56.503 } 00:11:56.503 [2024-11-09 16:21:16.097238] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:11:56.503 [2024-11-09 16:21:16.097377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67033 ] 00:11:56.503 [2024-11-09 16:21:16.247049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.764 [2024-11-09 16:21:16.423913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.679  [2024-11-09T16:21:19.393Z] Copying: 309/1024 [MB] (309 MBps) [2024-11-09T16:21:20.333Z] Copying: 619/1024 [MB] (310 MBps) [2024-11-09T16:21:20.593Z] Copying: 928/1024 [MB] (309 MBps) [2024-11-09T16:21:23.140Z] Copying: 1024/1024 [MB] (average 309 MBps) 00:12:03.370 00:12:03.370 16:21:23 -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:03.370 16:21:23 -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:03.370 16:21:23 -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:03.370 16:21:23 -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:03.370 16:21:23 -- dd/common.sh@31 -- # xtrace_disable 00:12:03.370 16:21:23 -- common/autotest_common.sh@10 -- # set +x 00:12:03.370 { 00:12:03.370 "subsystems": [ 00:12:03.370 { 00:12:03.370 "subsystem": "bdev", 00:12:03.370 "config": [ 00:12:03.370 { 00:12:03.370 "params": { 00:12:03.370 "block_size": 512, 00:12:03.370 "num_blocks": 2097152, 00:12:03.370 "name": "malloc0" 00:12:03.370 }, 00:12:03.370 "method": "bdev_malloc_create" 00:12:03.370 }, 00:12:03.370 { 00:12:03.370 "params": { 00:12:03.370 "io_mechanism": "io_uring", 00:12:03.370 "filename": "/dev/nullb0", 00:12:03.370 "name": "null0" 00:12:03.370 }, 00:12:03.370 "method": "bdev_xnvme_create" 00:12:03.370 }, 00:12:03.370 { 00:12:03.370 "method": "bdev_wait_for_examine" 00:12:03.370 } 00:12:03.370 ] 00:12:03.370 } 00:12:03.370 ] 00:12:03.370 } 00:12:03.631 [2024-11-09 16:21:23.140649] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:03.631 [2024-11-09 16:21:23.140799] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67120 ] 00:12:03.631 [2024-11-09 16:21:23.297068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.893 [2024-11-09 16:21:23.517529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.437  [2024-11-09T16:21:26.793Z] Copying: 245/1024 [MB] (245 MBps) [2024-11-09T16:21:27.726Z] Copying: 565/1024 [MB] (320 MBps) [2024-11-09T16:21:28.294Z] Copying: 886/1024 [MB] (320 MBps) [2024-11-09T16:21:30.202Z] Copying: 1024/1024 [MB] (average 298 MBps) 00:12:10.432 00:12:10.432 16:21:30 -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:10.432 16:21:30 -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:10.432 16:21:30 -- dd/common.sh@31 -- # xtrace_disable 00:12:10.432 16:21:30 -- common/autotest_common.sh@10 -- # set +x 00:12:10.432 { 00:12:10.432 "subsystems": [ 00:12:10.432 { 00:12:10.432 "subsystem": "bdev", 00:12:10.432 "config": [ 00:12:10.432 { 00:12:10.432 "params": { 00:12:10.432 "block_size": 512, 00:12:10.432 "num_blocks": 2097152, 00:12:10.432 "name": "malloc0" 00:12:10.432 }, 00:12:10.432 "method": "bdev_malloc_create" 00:12:10.432 }, 00:12:10.432 { 00:12:10.432 "params": { 00:12:10.432 "io_mechanism": "io_uring", 00:12:10.432 "filename": "/dev/nullb0", 00:12:10.432 "name": "null0" 00:12:10.432 }, 00:12:10.432 "method": "bdev_xnvme_create" 00:12:10.432 }, 00:12:10.432 { 00:12:10.432 "method": "bdev_wait_for_examine" 00:12:10.432 } 00:12:10.432 ] 00:12:10.432 } 00:12:10.432 ] 00:12:10.432 } 00:12:10.432 [2024-11-09 16:21:30.103592] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:10.432 [2024-11-09 16:21:30.103708] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67196 ] 00:12:10.690 [2024-11-09 16:21:30.252044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.690 [2024-11-09 16:21:30.391679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.597  [2024-11-09T16:21:33.303Z] Copying: 325/1024 [MB] (325 MBps) [2024-11-09T16:21:34.238Z] Copying: 651/1024 [MB] (325 MBps) [2024-11-09T16:21:34.497Z] Copying: 977/1024 [MB] (326 MBps) [2024-11-09T16:21:36.400Z] Copying: 1024/1024 [MB] (average 325 MBps) 00:12:16.630 00:12:16.630 16:21:36 -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:16.630 16:21:36 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:16.630 00:12:16.630 real 0m27.670s 00:12:16.630 user 0m24.102s 00:12:16.630 sys 0m3.012s 00:12:16.630 16:21:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:16.630 ************************************ 00:12:16.630 END TEST xnvme_to_malloc_dd_copy 00:12:16.630 ************************************ 00:12:16.630 16:21:36 -- common/autotest_common.sh@10 -- # set +x 00:12:16.630 16:21:36 -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:16.630 16:21:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:12:16.630 16:21:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:16.630 16:21:36 -- common/autotest_common.sh@10 -- # set +x 00:12:16.630 ************************************ 00:12:16.630 START TEST xnvme_bdevperf 00:12:16.630 ************************************ 00:12:16.630 16:21:36 -- common/autotest_common.sh@1114 -- # xnvme_bdevperf 00:12:16.630 16:21:36 -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:16.630 16:21:36 -- dd/common.sh@190 -- # [[ -e /sys/module/null_blk ]] 00:12:16.630 16:21:36 -- dd/common.sh@190 -- # modprobe null_blk gb=1 00:12:16.630 16:21:36 -- dd/common.sh@191 -- # return 00:12:16.630 16:21:36 -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:16.630 16:21:36 -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:16.630 16:21:36 -- xnvme/xnvme.sh@60 -- # local io 00:12:16.630 16:21:36 -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:16.630 16:21:36 -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:16.630 16:21:36 -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:16.630 16:21:36 -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:16.630 16:21:36 -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:16.630 16:21:36 -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:16.630 16:21:36 -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:16.630 16:21:36 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:16.630 16:21:36 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:16.630 16:21:36 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:16.630 16:21:36 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:16.630 16:21:36 -- dd/common.sh@31 -- # xtrace_disable 00:12:16.630 16:21:36 -- common/autotest_common.sh@10 -- # set +x 00:12:16.889 { 00:12:16.889 "subsystems": [ 00:12:16.889 { 00:12:16.889 "subsystem": "bdev", 00:12:16.889 "config": [ 00:12:16.889 { 00:12:16.889 "params": { 00:12:16.889 "io_mechanism": "libaio", 
00:12:16.889 "filename": "/dev/nullb0", 00:12:16.889 "name": "null0" 00:12:16.889 }, 00:12:16.889 "method": "bdev_xnvme_create" 00:12:16.889 }, 00:12:16.889 { 00:12:16.889 "method": "bdev_wait_for_examine" 00:12:16.889 } 00:12:16.889 ] 00:12:16.889 } 00:12:16.889 ] 00:12:16.889 } 00:12:16.889 [2024-11-09 16:21:36.444095] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:16.889 [2024-11-09 16:21:36.444199] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67301 ] 00:12:16.889 [2024-11-09 16:21:36.591813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.148 [2024-11-09 16:21:36.743989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.406 Running I/O for 5 seconds... 00:12:22.671 00:12:22.672 Latency(us) 00:12:22.672 [2024-11-09T16:21:42.442Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.672 [2024-11-09T16:21:42.442Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:22.672 null0 : 5.00 207910.18 812.15 0.00 0.00 305.84 110.28 1386.34 00:12:22.672 [2024-11-09T16:21:42.442Z] =================================================================================================================== 00:12:22.672 [2024-11-09T16:21:42.442Z] Total : 207910.18 812.15 0.00 0.00 305.84 110.28 1386.34 00:12:22.931 16:21:42 -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:22.931 16:21:42 -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:22.931 16:21:42 -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:22.931 16:21:42 -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:22.931 16:21:42 -- dd/common.sh@31 -- # xtrace_disable 00:12:22.931 16:21:42 -- common/autotest_common.sh@10 -- # set +x 00:12:22.931 { 00:12:22.931 "subsystems": [ 00:12:22.931 { 00:12:22.931 "subsystem": "bdev", 00:12:22.931 "config": [ 00:12:22.931 { 00:12:22.931 "params": { 00:12:22.931 "io_mechanism": "io_uring", 00:12:22.931 "filename": "/dev/nullb0", 00:12:22.931 "name": "null0" 00:12:22.931 }, 00:12:22.931 "method": "bdev_xnvme_create" 00:12:22.931 }, 00:12:22.931 { 00:12:22.931 "method": "bdev_wait_for_examine" 00:12:22.931 } 00:12:22.931 ] 00:12:22.931 } 00:12:22.931 ] 00:12:22.931 } 00:12:22.931 [2024-11-09 16:21:42.639724] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:22.931 [2024-11-09 16:21:42.639834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67375 ] 00:12:23.190 [2024-11-09 16:21:42.787902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.190 [2024-11-09 16:21:42.926628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.450 Running I/O for 5 seconds... 
00:12:28.720 00:12:28.720 Latency(us) 00:12:28.720 [2024-11-09T16:21:48.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:28.720 [2024-11-09T16:21:48.490Z] Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:28.720 null0 : 5.00 239425.56 935.26 0.00 0.00 265.11 150.45 340.28 00:12:28.720 [2024-11-09T16:21:48.490Z] =================================================================================================================== 00:12:28.720 [2024-11-09T16:21:48.490Z] Total : 239425.56 935.26 0.00 0.00 265.11 150.45 340.28 00:12:28.981 16:21:48 -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:12:28.981 16:21:48 -- dd/common.sh@195 -- # modprobe -r null_blk 00:12:29.242 00:12:29.242 real 0m12.409s 00:12:29.242 user 0m9.996s 00:12:29.242 sys 0m2.184s 00:12:29.242 ************************************ 00:12:29.242 END TEST xnvme_bdevperf 00:12:29.242 16:21:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:29.242 16:21:48 -- common/autotest_common.sh@10 -- # set +x 00:12:29.242 ************************************ 00:12:29.242 00:12:29.242 real 0m40.348s 00:12:29.242 user 0m34.195s 00:12:29.242 sys 0m5.330s 00:12:29.242 16:21:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:29.242 16:21:48 -- common/autotest_common.sh@10 -- # set +x 00:12:29.242 ************************************ 00:12:29.242 END TEST nvme_xnvme 00:12:29.242 ************************************ 00:12:29.242 16:21:48 -- spdk/autotest.sh@244 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:29.242 16:21:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:29.242 16:21:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:29.242 16:21:48 -- common/autotest_common.sh@10 -- # set +x 00:12:29.242 ************************************ 00:12:29.242 START TEST blockdev_xnvme 00:12:29.242 ************************************ 00:12:29.242 16:21:48 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:12:29.242 * Looking for test storage... 00:12:29.242 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:12:29.242 16:21:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:12:29.242 16:21:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:12:29.242 16:21:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:12:29.501 16:21:49 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:12:29.501 16:21:49 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:12:29.501 16:21:49 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:29.501 16:21:49 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:29.501 16:21:49 -- scripts/common.sh@335 -- # IFS=.-: 00:12:29.501 16:21:49 -- scripts/common.sh@335 -- # read -ra ver1 00:12:29.501 16:21:49 -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.501 16:21:49 -- scripts/common.sh@336 -- # read -ra ver2 00:12:29.501 16:21:49 -- scripts/common.sh@337 -- # local 'op=<' 00:12:29.501 16:21:49 -- scripts/common.sh@339 -- # ver1_l=2 00:12:29.501 16:21:49 -- scripts/common.sh@340 -- # ver2_l=1 00:12:29.501 16:21:49 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:29.501 16:21:49 -- scripts/common.sh@343 -- # case "$op" in 00:12:29.501 16:21:49 -- scripts/common.sh@344 -- # : 1 00:12:29.501 16:21:49 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:29.501 16:21:49 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:29.501 16:21:49 -- scripts/common.sh@364 -- # decimal 1 00:12:29.501 16:21:49 -- scripts/common.sh@352 -- # local d=1 00:12:29.501 16:21:49 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.501 16:21:49 -- scripts/common.sh@354 -- # echo 1 00:12:29.501 16:21:49 -- scripts/common.sh@364 -- # ver1[v]=1 00:12:29.501 16:21:49 -- scripts/common.sh@365 -- # decimal 2 00:12:29.501 16:21:49 -- scripts/common.sh@352 -- # local d=2 00:12:29.501 16:21:49 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.501 16:21:49 -- scripts/common.sh@354 -- # echo 2 00:12:29.501 16:21:49 -- scripts/common.sh@365 -- # ver2[v]=2 00:12:29.501 16:21:49 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:29.501 16:21:49 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:29.501 16:21:49 -- scripts/common.sh@367 -- # return 0 00:12:29.501 16:21:49 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.501 16:21:49 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:12:29.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.501 --rc genhtml_branch_coverage=1 00:12:29.501 --rc genhtml_function_coverage=1 00:12:29.501 --rc genhtml_legend=1 00:12:29.501 --rc geninfo_all_blocks=1 00:12:29.502 --rc geninfo_unexecuted_blocks=1 00:12:29.502 00:12:29.502 ' 00:12:29.502 16:21:49 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:12:29.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.502 --rc genhtml_branch_coverage=1 00:12:29.502 --rc genhtml_function_coverage=1 00:12:29.502 --rc genhtml_legend=1 00:12:29.502 --rc geninfo_all_blocks=1 00:12:29.502 --rc geninfo_unexecuted_blocks=1 00:12:29.502 00:12:29.502 ' 00:12:29.502 16:21:49 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:12:29.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.502 --rc genhtml_branch_coverage=1 00:12:29.502 --rc genhtml_function_coverage=1 00:12:29.502 --rc genhtml_legend=1 00:12:29.502 --rc geninfo_all_blocks=1 00:12:29.502 --rc geninfo_unexecuted_blocks=1 00:12:29.502 00:12:29.502 ' 00:12:29.502 16:21:49 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:12:29.502 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.502 --rc genhtml_branch_coverage=1 00:12:29.502 --rc genhtml_function_coverage=1 00:12:29.502 --rc genhtml_legend=1 00:12:29.502 --rc geninfo_all_blocks=1 00:12:29.502 --rc geninfo_unexecuted_blocks=1 00:12:29.502 00:12:29.502 ' 00:12:29.502 16:21:49 -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:12:29.502 16:21:49 -- bdev/nbd_common.sh@6 -- # set -e 00:12:29.502 16:21:49 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:12:29.502 16:21:49 -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:29.502 16:21:49 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:12:29.502 16:21:49 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:12:29.502 16:21:49 -- bdev/blockdev.sh@18 -- # : 00:12:29.502 16:21:49 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:12:29.502 16:21:49 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:12:29.502 16:21:49 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:12:29.502 16:21:49 -- bdev/blockdev.sh@672 -- # uname -s 00:12:29.502 16:21:49 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:12:29.502 16:21:49 -- 
bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:12:29.502 16:21:49 -- bdev/blockdev.sh@680 -- # test_type=xnvme 00:12:29.502 16:21:49 -- bdev/blockdev.sh@681 -- # crypto_device= 00:12:29.502 16:21:49 -- bdev/blockdev.sh@682 -- # dek= 00:12:29.502 16:21:49 -- bdev/blockdev.sh@683 -- # env_ctx= 00:12:29.502 16:21:49 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:12:29.502 16:21:49 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:12:29.502 16:21:49 -- bdev/blockdev.sh@688 -- # [[ xnvme == bdev ]] 00:12:29.502 16:21:49 -- bdev/blockdev.sh@688 -- # [[ xnvme == crypto_* ]] 00:12:29.502 16:21:49 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:12:29.502 16:21:49 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=67516 00:12:29.502 16:21:49 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:12:29.502 16:21:49 -- bdev/blockdev.sh@47 -- # waitforlisten 67516 00:12:29.502 16:21:49 -- common/autotest_common.sh@829 -- # '[' -z 67516 ']' 00:12:29.502 16:21:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:29.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:29.502 16:21:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:29.502 16:21:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:29.502 16:21:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:29.502 16:21:49 -- common/autotest_common.sh@10 -- # set +x 00:12:29.502 16:21:49 -- bdev/blockdev.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:12:29.502 [2024-11-09 16:21:49.123954] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:29.502 [2024-11-09 16:21:49.124099] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67516 ] 00:12:29.761 [2024-11-09 16:21:49.276415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.761 [2024-11-09 16:21:49.428667] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:29.761 [2024-11-09 16:21:49.428833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:30.326 16:21:49 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:30.326 16:21:49 -- common/autotest_common.sh@862 -- # return 0 00:12:30.326 16:21:49 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:12:30.326 16:21:49 -- bdev/blockdev.sh@727 -- # setup_xnvme_conf 00:12:30.326 16:21:49 -- bdev/blockdev.sh@86 -- # local io_mechanism=io_uring 00:12:30.326 16:21:49 -- bdev/blockdev.sh@87 -- # local nvme nvmes 00:12:30.326 16:21:49 -- bdev/blockdev.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:12:30.584 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:30.842 Waiting for block devices as requested 00:12:30.842 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme 00:12:30.842 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme 00:12:30.842 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:12:31.103 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:12:36.398 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing 00:12:36.398 16:21:55 -- bdev/blockdev.sh@90 -- # get_zoned_devs 00:12:36.398 16:21:55 -- 
common/autotest_common.sh@1664 -- # zoned_devs=() 00:12:36.398 16:21:55 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:12:36.398 16:21:55 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:12:36.398 16:21:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:36.398 16:21:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0c0n1 00:12:36.398 16:21:55 -- common/autotest_common.sh@1657 -- # local device=nvme0c0n1 00:12:36.398 16:21:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0c0n1/queue/zoned ]] 00:12:36.398 16:21:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:36.398 16:21:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:36.398 16:21:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:12:36.398 16:21:55 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:12:36.398 16:21:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:12:36.398 16:21:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:36.398 16:21:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:36.398 16:21:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n1 00:12:36.398 16:21:55 -- common/autotest_common.sh@1657 -- # local device=nvme1n1 00:12:36.398 16:21:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:12:36.398 16:21:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:36.398 16:21:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:36.398 16:21:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n2 00:12:36.398 16:21:55 -- common/autotest_common.sh@1657 -- # local device=nvme1n2 00:12:36.398 16:21:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:12:36.398 16:21:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:36.398 16:21:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:36.398 16:21:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme1n3 00:12:36.398 16:21:55 -- common/autotest_common.sh@1657 -- # local device=nvme1n3 00:12:36.398 16:21:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:12:36.398 16:21:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:36.398 16:21:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:36.398 16:21:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme2n1 00:12:36.398 16:21:55 -- common/autotest_common.sh@1657 -- # local device=nvme2n1 00:12:36.398 16:21:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:12:36.398 16:21:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:36.398 16:21:55 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:12:36.398 16:21:55 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme3n1 00:12:36.398 16:21:55 -- common/autotest_common.sh@1657 -- # local device=nvme3n1 00:12:36.398 16:21:55 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:12:36.398 16:21:55 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:12:36.398 16:21:55 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:36.398 16:21:55 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme0n1 ]] 00:12:36.398 16:21:55 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:36.398 16:21:55 -- bdev/blockdev.sh@94 -- # 
nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:36.398 16:21:55 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:36.398 16:21:55 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n1 ]] 00:12:36.398 16:21:55 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:36.398 16:21:55 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:36.398 16:21:55 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:36.398 16:21:55 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n2 ]] 00:12:36.398 16:21:55 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:36.398 16:21:55 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:36.398 16:21:55 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:36.398 16:21:55 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme1n3 ]] 00:12:36.398 16:21:55 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:36.398 16:21:55 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:36.398 16:21:55 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:36.398 16:21:55 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme2n1 ]] 00:12:36.398 16:21:55 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:36.398 16:21:55 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:36.398 16:21:55 -- bdev/blockdev.sh@92 -- # for nvme in /dev/nvme*n* 00:12:36.398 16:21:55 -- bdev/blockdev.sh@93 -- # [[ -b /dev/nvme3n1 ]] 00:12:36.398 16:21:55 -- bdev/blockdev.sh@93 -- # [[ -z '' ]] 00:12:36.398 16:21:55 -- bdev/blockdev.sh@94 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:12:36.399 16:21:55 -- bdev/blockdev.sh@97 -- # (( 6 > 0 )) 00:12:36.399 16:21:55 -- bdev/blockdev.sh@98 -- # rpc_cmd 00:12:36.399 16:21:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.399 16:21:55 -- common/autotest_common.sh@10 -- # set +x 00:12:36.399 16:21:55 -- bdev/blockdev.sh@98 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme1n2 nvme1n2 io_uring' 'bdev_xnvme_create /dev/nvme1n3 nvme1n3 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:12:36.399 nvme0n1 00:12:36.399 nvme1n1 00:12:36.399 nvme1n2 00:12:36.399 nvme1n3 00:12:36.399 nvme2n1 00:12:36.399 nvme3n1 00:12:36.399 16:21:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.399 16:21:55 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:12:36.399 16:21:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.399 16:21:55 -- common/autotest_common.sh@10 -- # set +x 00:12:36.399 16:21:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.399 16:21:55 -- bdev/blockdev.sh@738 -- # cat 00:12:36.399 16:21:55 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:12:36.399 16:21:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.399 16:21:55 -- common/autotest_common.sh@10 -- # set +x 00:12:36.399 16:21:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.399 16:21:55 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:12:36.399 16:21:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.399 16:21:55 -- common/autotest_common.sh@10 -- # set +x 00:12:36.399 16:21:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.399 16:21:55 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:12:36.399 16:21:55 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.399 16:21:55 -- common/autotest_common.sh@10 -- # set +x 00:12:36.399 16:21:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.399 16:21:55 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:12:36.399 16:21:55 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:12:36.399 16:21:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.399 16:21:55 -- common/autotest_common.sh@10 -- # set +x 00:12:36.399 16:21:55 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:12:36.399 16:21:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.399 16:21:55 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:12:36.399 16:21:55 -- bdev/blockdev.sh@747 -- # jq -r .name 00:12:36.399 16:21:55 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "515e459e-5e9e-450f-ae23-9809e2ab6e3b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "515e459e-5e9e-450f-ae23-9809e2ab6e3b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "e5d7a331-d11d-4625-be7c-a8b6956295cb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e5d7a331-d11d-4625-be7c-a8b6956295cb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "02adc8be-619c-488b-aa43-d5810a8d3a3b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "02adc8be-619c-488b-aa43-d5810a8d3a3b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "a99aa3c8-a15c-4151-bfbc-632619ca8e70"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a99aa3c8-a15c-4151-bfbc-632619ca8e70",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": 
false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "65d3d30e-acb8-4241-9924-eaaebc7c1f35"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "65d3d30e-acb8-4241-9924-eaaebc7c1f35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "fefba926-323f-48be-91b7-3df3a4119f70"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "fefba926-323f-48be-91b7-3df3a4119f70",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:12:36.399 16:21:55 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:12:36.399 16:21:55 -- bdev/blockdev.sh@750 -- # hello_world_bdev=nvme0n1 00:12:36.399 16:21:55 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:12:36.399 16:21:55 -- bdev/blockdev.sh@752 -- # killprocess 67516 00:12:36.399 16:21:55 -- common/autotest_common.sh@936 -- # '[' -z 67516 ']' 00:12:36.399 16:21:55 -- common/autotest_common.sh@940 -- # kill -0 67516 00:12:36.399 16:21:55 -- common/autotest_common.sh@941 -- # uname 00:12:36.399 16:21:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:36.399 16:21:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67516 00:12:36.399 16:21:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:36.399 killing process with pid 67516 00:12:36.399 16:21:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:36.399 16:21:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67516' 00:12:36.399 16:21:55 -- common/autotest_common.sh@955 -- # kill 67516 00:12:36.399 16:21:55 -- common/autotest_common.sh@960 -- # wait 67516 00:12:37.785 16:21:57 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:12:37.785 16:21:57 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:37.786 16:21:57 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:12:37.786 16:21:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:37.786 16:21:57 -- common/autotest_common.sh@10 -- # set +x 00:12:37.786 ************************************ 00:12:37.786 START TEST bdev_hello_world 00:12:37.786 ************************************ 00:12:37.786 16:21:57 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:12:37.786 [2024-11-09 16:21:57.510109] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
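(Annotation: hello_bdev is starting here against the first xnvme bdev. The equivalent manual invocation, assuming the bdev.json the harness generated above with its six bdev_xnvme_create entries and the repo root as working directory, would be roughly:)

    sudo ./build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1
    # expected NOTICE sequence, matching the log below: open the bdev,
    # write "Hello World!", read it back, then stop the app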
00:12:37.786 [2024-11-09 16:21:57.510219] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67896 ] 00:12:38.047 [2024-11-09 16:21:57.656504] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:38.307 [2024-11-09 16:21:57.822240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.569 [2024-11-09 16:21:58.125881] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:12:38.569 [2024-11-09 16:21:58.125929] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:12:38.569 [2024-11-09 16:21:58.125942] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:12:38.569 [2024-11-09 16:21:58.127493] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:12:38.569 [2024-11-09 16:21:58.127812] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:12:38.569 [2024-11-09 16:21:58.127831] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:12:38.569 [2024-11-09 16:21:58.128052] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:12:38.569 00:12:38.569 [2024-11-09 16:21:58.128084] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:12:39.141 00:12:39.141 real 0m1.341s 00:12:39.141 user 0m1.037s 00:12:39.141 sys 0m0.192s 00:12:39.141 16:21:58 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:39.141 ************************************ 00:12:39.141 END TEST bdev_hello_world 00:12:39.141 ************************************ 00:12:39.141 16:21:58 -- common/autotest_common.sh@10 -- # set +x 00:12:39.141 16:21:58 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:12:39.141 16:21:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:39.141 16:21:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:39.141 16:21:58 -- common/autotest_common.sh@10 -- # set +x 00:12:39.141 ************************************ 00:12:39.141 START TEST bdev_bounds 00:12:39.141 ************************************ 00:12:39.141 16:21:58 -- common/autotest_common.sh@1114 -- # bdev_bounds '' 00:12:39.141 16:21:58 -- bdev/blockdev.sh@288 -- # bdevio_pid=67933 00:12:39.141 16:21:58 -- bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:12:39.141 Process bdevio pid: 67933 00:12:39.141 16:21:58 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 67933' 00:12:39.141 16:21:58 -- bdev/blockdev.sh@291 -- # waitforlisten 67933 00:12:39.141 16:21:58 -- common/autotest_common.sh@829 -- # '[' -z 67933 ']' 00:12:39.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.141 16:21:58 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.141 16:21:58 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:39.141 16:21:58 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
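(Annotation: the bounds test starting here drives bdevio together with its RPC test client. A hand-run sketch of the same pair, assuming the generated bdev.json and the repo root as working directory:)

    sudo ./test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &
    # once it reports listening on /var/tmp/spdk.sock:
    sudo ./test/bdev/bdevio/tests.py perform_tests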
00:12:39.141 16:21:58 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:39.141 16:21:58 -- common/autotest_common.sh@10 -- # set +x 00:12:39.141 16:21:58 -- bdev/blockdev.sh@287 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:39.141 [2024-11-09 16:21:58.904220] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:12:39.141 [2024-11-09 16:21:58.904331] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67933 ] 00:12:39.402 [2024-11-09 16:21:59.048374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:39.663 [2024-11-09 16:21:59.218863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.663 [2024-11-09 16:21:59.219140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.663 [2024-11-09 16:21:59.219159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.236 16:21:59 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:40.236 16:21:59 -- common/autotest_common.sh@862 -- # return 0 00:12:40.236 16:21:59 -- bdev/blockdev.sh@292 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:12:40.236 I/O targets: 00:12:40.236 nvme0n1: 262144 blocks of 4096 bytes (1024 MiB) 00:12:40.236 nvme1n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:40.236 nvme1n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:40.236 nvme1n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:12:40.236 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:12:40.236 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:12:40.236 00:12:40.236 00:12:40.236 CUnit - A unit testing framework for C - Version 2.1-3 00:12:40.236 http://cunit.sourceforge.net/ 00:12:40.236 00:12:40.236 00:12:40.236 Suite: bdevio tests on: nvme3n1 00:12:40.236 Test: blockdev write read block ...passed 00:12:40.236 Test: blockdev write zeroes read block ...passed 00:12:40.236 Test: blockdev write zeroes read no split ...passed 00:12:40.236 Test: blockdev write zeroes read split ...passed 00:12:40.236 Test: blockdev write zeroes read split partial ...passed 00:12:40.236 Test: blockdev reset ...passed 00:12:40.236 Test: blockdev write read 8 blocks ...passed 00:12:40.236 Test: blockdev write read size > 128k ...passed 00:12:40.236 Test: blockdev write read invalid size ...passed 00:12:40.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:40.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:40.236 Test: blockdev write read max offset ...passed 00:12:40.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:40.236 Test: blockdev writev readv 8 blocks ...passed 00:12:40.236 Test: blockdev writev readv 30 x 1block ...passed 00:12:40.236 Test: blockdev writev readv block ...passed 00:12:40.236 Test: blockdev writev readv size > 128k ...passed 00:12:40.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:40.236 Test: blockdev comparev and writev ...passed 00:12:40.236 Test: blockdev nvme passthru rw ...passed 00:12:40.236 Test: blockdev nvme passthru vendor specific ...passed 00:12:40.236 Test: blockdev nvme admin passthru ...passed 00:12:40.236 Test: blockdev copy ...passed 00:12:40.236 Suite: bdevio tests on: nvme2n1 00:12:40.236 Test: blockdev write read 
block ...passed 00:12:40.236 Test: blockdev write zeroes read block ...passed 00:12:40.236 Test: blockdev write zeroes read no split ...passed 00:12:40.236 Test: blockdev write zeroes read split ...passed 00:12:40.236 Test: blockdev write zeroes read split partial ...passed 00:12:40.236 Test: blockdev reset ...passed 00:12:40.236 Test: blockdev write read 8 blocks ...passed 00:12:40.236 Test: blockdev write read size > 128k ...passed 00:12:40.236 Test: blockdev write read invalid size ...passed 00:12:40.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:40.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:40.236 Test: blockdev write read max offset ...passed 00:12:40.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:40.236 Test: blockdev writev readv 8 blocks ...passed 00:12:40.236 Test: blockdev writev readv 30 x 1block ...passed 00:12:40.236 Test: blockdev writev readv block ...passed 00:12:40.236 Test: blockdev writev readv size > 128k ...passed 00:12:40.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:40.236 Test: blockdev comparev and writev ...passed 00:12:40.236 Test: blockdev nvme passthru rw ...passed 00:12:40.236 Test: blockdev nvme passthru vendor specific ...passed 00:12:40.236 Test: blockdev nvme admin passthru ...passed 00:12:40.236 Test: blockdev copy ...passed 00:12:40.236 Suite: bdevio tests on: nvme1n3 00:12:40.236 Test: blockdev write read block ...passed 00:12:40.236 Test: blockdev write zeroes read block ...passed 00:12:40.236 Test: blockdev write zeroes read no split ...passed 00:12:40.236 Test: blockdev write zeroes read split ...passed 00:12:40.236 Test: blockdev write zeroes read split partial ...passed 00:12:40.236 Test: blockdev reset ...passed 00:12:40.236 Test: blockdev write read 8 blocks ...passed 00:12:40.236 Test: blockdev write read size > 128k ...passed 00:12:40.236 Test: blockdev write read invalid size ...passed 00:12:40.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:40.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:40.236 Test: blockdev write read max offset ...passed 00:12:40.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:40.236 Test: blockdev writev readv 8 blocks ...passed 00:12:40.236 Test: blockdev writev readv 30 x 1block ...passed 00:12:40.236 Test: blockdev writev readv block ...passed 00:12:40.236 Test: blockdev writev readv size > 128k ...passed 00:12:40.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:40.236 Test: blockdev comparev and writev ...passed 00:12:40.236 Test: blockdev nvme passthru rw ...passed 00:12:40.236 Test: blockdev nvme passthru vendor specific ...passed 00:12:40.236 Test: blockdev nvme admin passthru ...passed 00:12:40.236 Test: blockdev copy ...passed 00:12:40.236 Suite: bdevio tests on: nvme1n2 00:12:40.236 Test: blockdev write read block ...passed 00:12:40.236 Test: blockdev write zeroes read block ...passed 00:12:40.236 Test: blockdev write zeroes read no split ...passed 00:12:40.497 Test: blockdev write zeroes read split ...passed 00:12:40.497 Test: blockdev write zeroes read split partial ...passed 00:12:40.497 Test: blockdev reset ...passed 00:12:40.497 Test: blockdev write read 8 blocks ...passed 00:12:40.497 Test: blockdev write read size > 128k ...passed 00:12:40.497 Test: blockdev write read invalid size ...passed 00:12:40.497 Test: blockdev write read offset + nbytes 
== size of blockdev ...passed 00:12:40.497 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:40.497 Test: blockdev write read max offset ...passed 00:12:40.497 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:40.497 Test: blockdev writev readv 8 blocks ...passed 00:12:40.497 Test: blockdev writev readv 30 x 1block ...passed 00:12:40.497 Test: blockdev writev readv block ...passed 00:12:40.497 Test: blockdev writev readv size > 128k ...passed 00:12:40.497 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:40.497 Test: blockdev comparev and writev ...passed 00:12:40.497 Test: blockdev nvme passthru rw ...passed 00:12:40.497 Test: blockdev nvme passthru vendor specific ...passed 00:12:40.497 Test: blockdev nvme admin passthru ...passed 00:12:40.497 Test: blockdev copy ...passed 00:12:40.497 Suite: bdevio tests on: nvme1n1 00:12:40.497 Test: blockdev write read block ...passed 00:12:40.497 Test: blockdev write zeroes read block ...passed 00:12:40.497 Test: blockdev write zeroes read no split ...passed 00:12:40.497 Test: blockdev write zeroes read split ...passed 00:12:40.497 Test: blockdev write zeroes read split partial ...passed 00:12:40.497 Test: blockdev reset ...passed 00:12:40.497 Test: blockdev write read 8 blocks ...passed 00:12:40.497 Test: blockdev write read size > 128k ...passed 00:12:40.497 Test: blockdev write read invalid size ...passed 00:12:40.497 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:40.497 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:40.497 Test: blockdev write read max offset ...passed 00:12:40.497 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:40.497 Test: blockdev writev readv 8 blocks ...passed 00:12:40.497 Test: blockdev writev readv 30 x 1block ...passed 00:12:40.497 Test: blockdev writev readv block ...passed 00:12:40.497 Test: blockdev writev readv size > 128k ...passed 00:12:40.497 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:40.497 Test: blockdev comparev and writev ...passed 00:12:40.497 Test: blockdev nvme passthru rw ...passed 00:12:40.497 Test: blockdev nvme passthru vendor specific ...passed 00:12:40.497 Test: blockdev nvme admin passthru ...passed 00:12:40.497 Test: blockdev copy ...passed 00:12:40.497 Suite: bdevio tests on: nvme0n1 00:12:40.497 Test: blockdev write read block ...passed 00:12:40.497 Test: blockdev write zeroes read block ...passed 00:12:40.497 Test: blockdev write zeroes read no split ...passed 00:12:40.497 Test: blockdev write zeroes read split ...passed 00:12:40.497 Test: blockdev write zeroes read split partial ...passed 00:12:40.497 Test: blockdev reset ...passed 00:12:40.497 Test: blockdev write read 8 blocks ...passed 00:12:40.497 Test: blockdev write read size > 128k ...passed 00:12:40.497 Test: blockdev write read invalid size ...passed 00:12:40.497 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:40.497 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:40.497 Test: blockdev write read max offset ...passed 00:12:40.497 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:40.497 Test: blockdev writev readv 8 blocks ...passed 00:12:40.497 Test: blockdev writev readv 30 x 1block ...passed 00:12:40.497 Test: blockdev writev readv block ...passed 00:12:40.497 Test: blockdev writev readv size > 128k ...passed 00:12:40.497 Test: blockdev writev readv size > 
128k in two iovs ...passed 00:12:40.497 Test: blockdev comparev and writev ...passed 00:12:40.497 Test: blockdev nvme passthru rw ...passed 00:12:40.497 Test: blockdev nvme passthru vendor specific ...passed 00:12:40.497 Test: blockdev nvme admin passthru ...passed 00:12:40.497 Test: blockdev copy ...passed 00:12:40.497 00:12:40.497 Run Summary: Type Total Ran Passed Failed Inactive 00:12:40.497 suites 6 6 n/a 0 0 00:12:40.497 tests 138 138 138 0 0 00:12:40.497 asserts 780 780 780 0 n/a 00:12:40.497 00:12:40.497 Elapsed time = 0.939 seconds 00:12:40.497 0 00:12:40.497 16:22:00 -- bdev/blockdev.sh@293 -- # killprocess 67933 00:12:40.497 16:22:00 -- common/autotest_common.sh@936 -- # '[' -z 67933 ']' 00:12:40.497 16:22:00 -- common/autotest_common.sh@940 -- # kill -0 67933 00:12:40.497 16:22:00 -- common/autotest_common.sh@941 -- # uname 00:12:40.497 16:22:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:40.497 16:22:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67933 00:12:40.497 16:22:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:40.497 killing process with pid 67933 00:12:40.497 16:22:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:40.497 16:22:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67933' 00:12:40.498 16:22:00 -- common/autotest_common.sh@955 -- # kill 67933 00:12:40.498 16:22:00 -- common/autotest_common.sh@960 -- # wait 67933 00:12:41.440 16:22:00 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:12:41.440 00:12:41.440 real 0m2.031s 00:12:41.440 user 0m4.772s 00:12:41.440 sys 0m0.292s 00:12:41.440 16:22:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:41.440 16:22:00 -- common/autotest_common.sh@10 -- # set +x 00:12:41.440 ************************************ 00:12:41.440 END TEST bdev_bounds 00:12:41.440 ************************************ 00:12:41.440 16:22:00 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:12:41.440 16:22:00 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:12:41.440 16:22:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:41.440 16:22:00 -- common/autotest_common.sh@10 -- # set +x 00:12:41.440 ************************************ 00:12:41.440 START TEST bdev_nbd 00:12:41.440 ************************************ 00:12:41.440 16:22:00 -- common/autotest_common.sh@1114 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '' 00:12:41.440 16:22:00 -- bdev/blockdev.sh@298 -- # uname -s 00:12:41.440 16:22:00 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:12:41.440 16:22:00 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:41.440 16:22:00 -- bdev/blockdev.sh@301 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:12:41.440 16:22:00 -- bdev/blockdev.sh@302 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:41.440 16:22:00 -- bdev/blockdev.sh@302 -- # local bdev_all 00:12:41.440 16:22:00 -- bdev/blockdev.sh@303 -- # local bdev_num=6 00:12:41.440 16:22:00 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:12:41.440 16:22:00 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' 
'/dev/nbd9') 00:12:41.440 16:22:00 -- bdev/blockdev.sh@309 -- # local nbd_all 00:12:41.440 16:22:00 -- bdev/blockdev.sh@310 -- # bdev_num=6 00:12:41.440 16:22:00 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:41.440 16:22:00 -- bdev/blockdev.sh@312 -- # local nbd_list 00:12:41.440 16:22:00 -- bdev/blockdev.sh@313 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:41.440 16:22:00 -- bdev/blockdev.sh@313 -- # local bdev_list 00:12:41.440 16:22:00 -- bdev/blockdev.sh@316 -- # nbd_pid=67997 00:12:41.440 16:22:00 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:12:41.440 16:22:00 -- bdev/blockdev.sh@318 -- # waitforlisten 67997 /var/tmp/spdk-nbd.sock 00:12:41.440 16:22:00 -- common/autotest_common.sh@829 -- # '[' -z 67997 ']' 00:12:41.440 16:22:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:12:41.440 16:22:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:41.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:12:41.440 16:22:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:12:41.440 16:22:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:41.440 16:22:00 -- common/autotest_common.sh@10 -- # set +x 00:12:41.440 16:22:00 -- bdev/blockdev.sh@315 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:12:41.440 [2024-11-09 16:22:01.011723] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:12:41.440 [2024-11-09 16:22:01.011839] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.440 [2024-11-09 16:22:01.159696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.701 [2024-11-09 16:22:01.322764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.273 16:22:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:42.273 16:22:01 -- common/autotest_common.sh@862 -- # return 0 00:12:42.273 16:22:01 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:42.273 16:22:01 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.273 16:22:01 -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:42.273 16:22:01 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:12:42.273 16:22:01 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' 00:12:42.273 16:22:01 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:42.273 16:22:01 -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:42.273 16:22:01 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:12:42.273 16:22:01 -- bdev/nbd_common.sh@24 -- # local i 00:12:42.273 16:22:01 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:12:42.273 16:22:01 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:12:42.273 16:22:01 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:42.273 16:22:01 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:12:42.273 16:22:02 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:12:42.273 16:22:02 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:12:42.273 16:22:02 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:12:42.273 16:22:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:42.273 16:22:02 -- common/autotest_common.sh@867 -- # local i 00:12:42.273 16:22:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:42.273 16:22:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:42.274 16:22:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:42.274 16:22:02 -- common/autotest_common.sh@871 -- # break 00:12:42.274 16:22:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:42.274 16:22:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:42.274 16:22:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.274 1+0 records in 00:12:42.274 1+0 records out 00:12:42.274 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00147809 s, 2.8 MB/s 00:12:42.274 16:22:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.535 16:22:02 -- common/autotest_common.sh@884 -- # size=4096 00:12:42.535 16:22:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.535 16:22:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:42.535 16:22:02 -- common/autotest_common.sh@887 -- # return 0 00:12:42.535 16:22:02 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:42.535 16:22:02 -- 
bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:42.535 16:22:02 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:12:42.535 16:22:02 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:12:42.535 16:22:02 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:12:42.535 16:22:02 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:12:42.535 16:22:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:42.535 16:22:02 -- common/autotest_common.sh@867 -- # local i 00:12:42.535 16:22:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:42.535 16:22:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:42.535 16:22:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:42.535 16:22:02 -- common/autotest_common.sh@871 -- # break 00:12:42.535 16:22:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:42.535 16:22:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:42.535 16:22:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.535 1+0 records in 00:12:42.535 1+0 records out 00:12:42.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109493 s, 3.7 MB/s 00:12:42.535 16:22:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.535 16:22:02 -- common/autotest_common.sh@884 -- # size=4096 00:12:42.535 16:22:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.535 16:22:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:42.535 16:22:02 -- common/autotest_common.sh@887 -- # return 0 00:12:42.535 16:22:02 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:42.535 16:22:02 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:42.535 16:22:02 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 00:12:42.797 16:22:02 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:12:42.797 16:22:02 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:12:42.797 16:22:02 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:12:42.797 16:22:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd2 00:12:42.797 16:22:02 -- common/autotest_common.sh@867 -- # local i 00:12:42.797 16:22:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:42.797 16:22:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:42.797 16:22:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd2 /proc/partitions 00:12:42.797 16:22:02 -- common/autotest_common.sh@871 -- # break 00:12:42.797 16:22:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:42.797 16:22:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:42.797 16:22:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:42.797 1+0 records in 00:12:42.797 1+0 records out 00:12:42.797 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00107608 s, 3.8 MB/s 00:12:42.797 16:22:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.797 16:22:02 -- common/autotest_common.sh@884 -- # size=4096 00:12:42.797 16:22:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:42.797 16:22:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:42.797 16:22:02 -- common/autotest_common.sh@887 -- # return 0 
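(Annotation: each loop iteration above is the same attach-and-probe pattern — nbd_start_disk over the dedicated /var/tmp/spdk-nbd.sock socket, wait for the device to appear in /proc/partitions, then one direct 4 KiB read. By hand, for a single device, that is approximately the following; /tmp/nbdtest is illustrative, the harness writes into the repo's test/bdev directory:)

    sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd2
    grep -q -w nbd2 /proc/partitions && \
        sudo dd if=/dev/nbd2 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    sudo ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2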
00:12:42.797 16:22:02 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:42.797 16:22:02 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:42.797 16:22:02 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 00:12:43.058 16:22:02 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:12:43.058 16:22:02 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:12:43.058 16:22:02 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:12:43.058 16:22:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd3 00:12:43.058 16:22:02 -- common/autotest_common.sh@867 -- # local i 00:12:43.058 16:22:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:43.058 16:22:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:43.058 16:22:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd3 /proc/partitions 00:12:43.058 16:22:02 -- common/autotest_common.sh@871 -- # break 00:12:43.058 16:22:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:43.058 16:22:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:43.058 16:22:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.058 1+0 records in 00:12:43.058 1+0 records out 00:12:43.058 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000873715 s, 4.7 MB/s 00:12:43.058 16:22:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.058 16:22:02 -- common/autotest_common.sh@884 -- # size=4096 00:12:43.058 16:22:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.058 16:22:02 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:43.058 16:22:02 -- common/autotest_common.sh@887 -- # return 0 00:12:43.058 16:22:02 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:43.058 16:22:02 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:43.058 16:22:02 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:12:43.318 16:22:02 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:12:43.318 16:22:02 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:12:43.318 16:22:02 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:12:43.318 16:22:02 -- common/autotest_common.sh@866 -- # local nbd_name=nbd4 00:12:43.318 16:22:02 -- common/autotest_common.sh@867 -- # local i 00:12:43.318 16:22:02 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:43.318 16:22:02 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:43.318 16:22:02 -- common/autotest_common.sh@870 -- # grep -q -w nbd4 /proc/partitions 00:12:43.318 16:22:02 -- common/autotest_common.sh@871 -- # break 00:12:43.318 16:22:02 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:43.318 16:22:02 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:43.318 16:22:02 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.318 1+0 records in 00:12:43.318 1+0 records out 00:12:43.318 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0015984 s, 2.6 MB/s 00:12:43.318 16:22:02 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.318 16:22:02 -- common/autotest_common.sh@884 -- # size=4096 00:12:43.318 16:22:02 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.318 16:22:02 -- common/autotest_common.sh@886 -- # '[' 4096 
'!=' 0 ']' 00:12:43.318 16:22:02 -- common/autotest_common.sh@887 -- # return 0 00:12:43.318 16:22:02 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:43.318 16:22:02 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:43.318 16:22:02 -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:12:43.579 16:22:03 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:12:43.580 16:22:03 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:12:43.580 16:22:03 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:12:43.580 16:22:03 -- common/autotest_common.sh@866 -- # local nbd_name=nbd5 00:12:43.580 16:22:03 -- common/autotest_common.sh@867 -- # local i 00:12:43.580 16:22:03 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:43.580 16:22:03 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:43.580 16:22:03 -- common/autotest_common.sh@870 -- # grep -q -w nbd5 /proc/partitions 00:12:43.580 16:22:03 -- common/autotest_common.sh@871 -- # break 00:12:43.580 16:22:03 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:43.580 16:22:03 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:43.580 16:22:03 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:43.580 1+0 records in 00:12:43.580 1+0 records out 00:12:43.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00125536 s, 3.3 MB/s 00:12:43.580 16:22:03 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.580 16:22:03 -- common/autotest_common.sh@884 -- # size=4096 00:12:43.580 16:22:03 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:43.580 16:22:03 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:43.580 16:22:03 -- common/autotest_common.sh@887 -- # return 0 00:12:43.580 16:22:03 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:12:43.580 16:22:03 -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:12:43.580 16:22:03 -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:43.841 16:22:03 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:12:43.841 { 00:12:43.842 "nbd_device": "/dev/nbd0", 00:12:43.842 "bdev_name": "nvme0n1" 00:12:43.842 }, 00:12:43.842 { 00:12:43.842 "nbd_device": "/dev/nbd1", 00:12:43.842 "bdev_name": "nvme1n1" 00:12:43.842 }, 00:12:43.842 { 00:12:43.842 "nbd_device": "/dev/nbd2", 00:12:43.842 "bdev_name": "nvme1n2" 00:12:43.842 }, 00:12:43.842 { 00:12:43.842 "nbd_device": "/dev/nbd3", 00:12:43.842 "bdev_name": "nvme1n3" 00:12:43.842 }, 00:12:43.842 { 00:12:43.842 "nbd_device": "/dev/nbd4", 00:12:43.842 "bdev_name": "nvme2n1" 00:12:43.842 }, 00:12:43.842 { 00:12:43.842 "nbd_device": "/dev/nbd5", 00:12:43.842 "bdev_name": "nvme3n1" 00:12:43.842 } 00:12:43.842 ]' 00:12:43.842 16:22:03 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:12:43.842 16:22:03 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:12:43.842 16:22:03 -- bdev/nbd_common.sh@119 -- # echo '[ 00:12:43.842 { 00:12:43.842 "nbd_device": "/dev/nbd0", 00:12:43.842 "bdev_name": "nvme0n1" 00:12:43.842 }, 00:12:43.842 { 00:12:43.842 "nbd_device": "/dev/nbd1", 00:12:43.842 "bdev_name": "nvme1n1" 00:12:43.842 }, 00:12:43.842 { 00:12:43.842 "nbd_device": "/dev/nbd2", 00:12:43.842 "bdev_name": "nvme1n2" 00:12:43.842 }, 00:12:43.842 { 00:12:43.842 "nbd_device": "/dev/nbd3", 00:12:43.842 
"bdev_name": "nvme1n3" 00:12:43.842 }, 00:12:43.842 { 00:12:43.842 "nbd_device": "/dev/nbd4", 00:12:43.842 "bdev_name": "nvme2n1" 00:12:43.842 }, 00:12:43.842 { 00:12:43.842 "nbd_device": "/dev/nbd5", 00:12:43.842 "bdev_name": "nvme3n1" 00:12:43.842 } 00:12:43.842 ]' 00:12:43.842 16:22:03 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:12:43.842 16:22:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:43.842 16:22:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:12:43.842 16:22:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:43.842 16:22:03 -- bdev/nbd_common.sh@51 -- # local i 00:12:43.842 16:22:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:43.842 16:22:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:44.103 16:22:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:44.103 16:22:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:44.103 16:22:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:44.103 16:22:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.103 16:22:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.103 16:22:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:44.103 16:22:03 -- bdev/nbd_common.sh@41 -- # break 00:12:44.103 16:22:03 -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.103 16:22:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.103 16:22:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:44.365 16:22:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:44.365 16:22:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:44.365 16:22:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:44.365 16:22:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.365 16:22:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.365 16:22:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:44.365 16:22:03 -- bdev/nbd_common.sh@41 -- # break 00:12:44.365 16:22:03 -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.365 16:22:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.365 16:22:03 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:12:44.365 16:22:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:12:44.365 16:22:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:12:44.365 16:22:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:12:44.365 16:22:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.365 16:22:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.365 16:22:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:12:44.365 16:22:04 -- bdev/nbd_common.sh@41 -- # break 00:12:44.365 16:22:04 -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.365 16:22:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.365 16:22:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:12:44.626 16:22:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:12:44.626 16:22:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:12:44.626 16:22:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:12:44.626 
16:22:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.626 16:22:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.626 16:22:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:12:44.626 16:22:04 -- bdev/nbd_common.sh@41 -- # break 00:12:44.626 16:22:04 -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.626 16:22:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.626 16:22:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:12:44.887 16:22:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:12:44.887 16:22:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:12:44.887 16:22:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:12:44.887 16:22:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:44.887 16:22:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:44.887 16:22:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:12:44.887 16:22:04 -- bdev/nbd_common.sh@41 -- # break 00:12:44.887 16:22:04 -- bdev/nbd_common.sh@45 -- # return 0 00:12:44.887 16:22:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:44.887 16:22:04 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:12:45.149 16:22:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:12:45.149 16:22:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:12:45.149 16:22:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:12:45.149 16:22:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:45.149 16:22:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:45.149 16:22:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:12:45.149 16:22:04 -- bdev/nbd_common.sh@41 -- # break 00:12:45.149 16:22:04 -- bdev/nbd_common.sh@45 -- # return 0 00:12:45.149 16:22:04 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:45.149 16:22:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:45.149 16:22:04 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@65 -- # true 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@65 -- # count=0 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@122 -- # count=0 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@127 -- # return 0 00:12:45.411 16:22:04 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' 
'/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme1n2 nvme1n3 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme1n2' 'nvme1n3' 'nvme2n1' 'nvme3n1') 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@12 -- # local i 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:45.411 16:22:04 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:12:45.411 /dev/nbd0 00:12:45.411 16:22:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:12:45.411 16:22:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:12:45.411 16:22:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:12:45.411 16:22:05 -- common/autotest_common.sh@867 -- # local i 00:12:45.411 16:22:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:45.411 16:22:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:45.411 16:22:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:12:45.673 16:22:05 -- common/autotest_common.sh@871 -- # break 00:12:45.673 16:22:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:45.673 16:22:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:45.673 16:22:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.673 1+0 records in 00:12:45.673 1+0 records out 00:12:45.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000901497 s, 4.5 MB/s 00:12:45.673 16:22:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.673 16:22:05 -- common/autotest_common.sh@884 -- # size=4096 00:12:45.673 16:22:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.673 16:22:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:45.673 16:22:05 -- common/autotest_common.sh@887 -- # return 0 00:12:45.673 16:22:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:45.673 16:22:05 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:45.673 16:22:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:12:45.673 /dev/nbd1 00:12:45.673 16:22:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:12:45.673 16:22:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:12:45.673 16:22:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:12:45.673 16:22:05 -- common/autotest_common.sh@867 -- # local i 00:12:45.673 16:22:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:45.673 16:22:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:45.673 16:22:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:12:45.673 16:22:05 -- common/autotest_common.sh@871 -- # break 
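The waitfornbd helper traced on either side of this point (here for nbd1; its dd read-back continues below) is the readiness check applied to every attached nbd device: poll /proc/partitions until the kernel exposes the node, then read one 4 KiB block with O_DIRECT into a scratch file and require a non-zero size. A minimal bash sketch of that pattern, paraphrased from the trace (the poll pacing and cleanup details are assumptions, not shown verbatim in the log):

    waitfornbd() {
        local nbd_name=$1 i
        local scratch=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        for ((i = 1; i <= 20; i++)); do
            # stop polling once the kernel lists the device as a partition
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumption: the delay between polls is not visible in the trace
        done
        # prove the device services I/O: read a single 4 KiB block with O_DIRECT
        dd if=/dev/"$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct || return 1
        local size
        size=$(stat -c %s "$scratch")
        rm -f "$scratch"
        [ "$size" != 0 ]
    }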
00:12:45.673 16:22:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:45.673 16:22:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:45.673 16:22:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.673 1+0 records in 00:12:45.673 1+0 records out 00:12:45.673 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000824669 s, 5.0 MB/s 00:12:45.673 16:22:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.673 16:22:05 -- common/autotest_common.sh@884 -- # size=4096 00:12:45.673 16:22:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.673 16:22:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:45.673 16:22:05 -- common/autotest_common.sh@887 -- # return 0 00:12:45.673 16:22:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:45.673 16:22:05 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:45.673 16:22:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n2 /dev/nbd10 00:12:45.935 /dev/nbd10 00:12:45.935 16:22:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:12:45.935 16:22:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:12:45.935 16:22:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd10 00:12:45.935 16:22:05 -- common/autotest_common.sh@867 -- # local i 00:12:45.935 16:22:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:45.935 16:22:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:45.935 16:22:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd10 /proc/partitions 00:12:45.935 16:22:05 -- common/autotest_common.sh@871 -- # break 00:12:45.935 16:22:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:45.935 16:22:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:45.935 16:22:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:45.935 1+0 records in 00:12:45.935 1+0 records out 00:12:45.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010921 s, 3.8 MB/s 00:12:45.935 16:22:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.935 16:22:05 -- common/autotest_common.sh@884 -- # size=4096 00:12:45.935 16:22:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:45.935 16:22:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:45.935 16:22:05 -- common/autotest_common.sh@887 -- # return 0 00:12:45.935 16:22:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:45.935 16:22:05 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:45.935 16:22:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n3 /dev/nbd11 00:12:46.197 /dev/nbd11 00:12:46.197 16:22:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:12:46.197 16:22:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:12:46.197 16:22:05 -- common/autotest_common.sh@866 -- # local nbd_name=nbd11 00:12:46.197 16:22:05 -- common/autotest_common.sh@867 -- # local i 00:12:46.197 16:22:05 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:46.197 16:22:05 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:46.197 16:22:05 -- common/autotest_common.sh@870 -- # grep -q -w nbd11 /proc/partitions 00:12:46.197 16:22:05 -- 
common/autotest_common.sh@871 -- # break 00:12:46.197 16:22:05 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:46.197 16:22:05 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:46.197 16:22:05 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.197 1+0 records in 00:12:46.197 1+0 records out 00:12:46.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00161504 s, 2.5 MB/s 00:12:46.197 16:22:05 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.197 16:22:05 -- common/autotest_common.sh@884 -- # size=4096 00:12:46.197 16:22:05 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.197 16:22:05 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:46.197 16:22:05 -- common/autotest_common.sh@887 -- # return 0 00:12:46.197 16:22:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.197 16:22:05 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:46.197 16:22:05 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:12:46.459 /dev/nbd12 00:12:46.459 16:22:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:12:46.459 16:22:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:12:46.459 16:22:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd12 00:12:46.459 16:22:06 -- common/autotest_common.sh@867 -- # local i 00:12:46.459 16:22:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:46.459 16:22:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:46.459 16:22:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd12 /proc/partitions 00:12:46.459 16:22:06 -- common/autotest_common.sh@871 -- # break 00:12:46.459 16:22:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:46.459 16:22:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:46.459 16:22:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.459 1+0 records in 00:12:46.459 1+0 records out 00:12:46.459 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00145194 s, 2.8 MB/s 00:12:46.459 16:22:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.459 16:22:06 -- common/autotest_common.sh@884 -- # size=4096 00:12:46.459 16:22:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.459 16:22:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:46.459 16:22:06 -- common/autotest_common.sh@887 -- # return 0 00:12:46.459 16:22:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.459 16:22:06 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:46.459 16:22:06 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:12:46.719 /dev/nbd13 00:12:46.719 16:22:06 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:12:46.719 16:22:06 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:12:46.719 16:22:06 -- common/autotest_common.sh@866 -- # local nbd_name=nbd13 00:12:46.719 16:22:06 -- common/autotest_common.sh@867 -- # local i 00:12:46.719 16:22:06 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:12:46.719 16:22:06 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:12:46.719 16:22:06 -- common/autotest_common.sh@870 -- # grep -q -w nbd13 /proc/partitions 
00:12:46.719 16:22:06 -- common/autotest_common.sh@871 -- # break 00:12:46.719 16:22:06 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:12:46.719 16:22:06 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:12:46.719 16:22:06 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:12:46.719 1+0 records in 00:12:46.719 1+0 records out 00:12:46.719 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00114831 s, 3.6 MB/s 00:12:46.719 16:22:06 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.719 16:22:06 -- common/autotest_common.sh@884 -- # size=4096 00:12:46.719 16:22:06 -- common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:12:46.719 16:22:06 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:12:46.719 16:22:06 -- common/autotest_common.sh@887 -- # return 0 00:12:46.719 16:22:06 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:12:46.719 16:22:06 -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:12:46.719 16:22:06 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:46.719 16:22:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:46.719 16:22:06 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:46.981 16:22:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:12:46.981 { 00:12:46.981 "nbd_device": "/dev/nbd0", 00:12:46.981 "bdev_name": "nvme0n1" 00:12:46.981 }, 00:12:46.981 { 00:12:46.981 "nbd_device": "/dev/nbd1", 00:12:46.981 "bdev_name": "nvme1n1" 00:12:46.981 }, 00:12:46.981 { 00:12:46.981 "nbd_device": "/dev/nbd10", 00:12:46.981 "bdev_name": "nvme1n2" 00:12:46.981 }, 00:12:46.981 { 00:12:46.981 "nbd_device": "/dev/nbd11", 00:12:46.981 "bdev_name": "nvme1n3" 00:12:46.981 }, 00:12:46.981 { 00:12:46.981 "nbd_device": "/dev/nbd12", 00:12:46.981 "bdev_name": "nvme2n1" 00:12:46.981 }, 00:12:46.981 { 00:12:46.981 "nbd_device": "/dev/nbd13", 00:12:46.981 "bdev_name": "nvme3n1" 00:12:46.981 } 00:12:46.981 ]' 00:12:46.981 16:22:06 -- bdev/nbd_common.sh@64 -- # echo '[ 00:12:46.981 { 00:12:46.981 "nbd_device": "/dev/nbd0", 00:12:46.981 "bdev_name": "nvme0n1" 00:12:46.981 }, 00:12:46.981 { 00:12:46.981 "nbd_device": "/dev/nbd1", 00:12:46.981 "bdev_name": "nvme1n1" 00:12:46.981 }, 00:12:46.981 { 00:12:46.981 "nbd_device": "/dev/nbd10", 00:12:46.981 "bdev_name": "nvme1n2" 00:12:46.981 }, 00:12:46.981 { 00:12:46.981 "nbd_device": "/dev/nbd11", 00:12:46.981 "bdev_name": "nvme1n3" 00:12:46.981 }, 00:12:46.981 { 00:12:46.981 "nbd_device": "/dev/nbd12", 00:12:46.981 "bdev_name": "nvme2n1" 00:12:46.981 }, 00:12:46.981 { 00:12:46.981 "nbd_device": "/dev/nbd13", 00:12:46.981 "bdev_name": "nvme3n1" 00:12:46.981 } 00:12:46.981 ]' 00:12:46.981 16:22:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:46.981 16:22:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:12:46.981 /dev/nbd1 00:12:46.981 /dev/nbd10 00:12:46.981 /dev/nbd11 00:12:46.981 /dev/nbd12 00:12:46.981 /dev/nbd13' 00:12:46.981 16:22:06 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:12:46.981 /dev/nbd1 00:12:46.981 /dev/nbd10 00:12:46.981 /dev/nbd11 00:12:46.981 /dev/nbd12 00:12:46.981 /dev/nbd13' 00:12:46.981 16:22:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:46.981 16:22:06 -- bdev/nbd_common.sh@65 -- # count=6 00:12:46.981 16:22:06 -- bdev/nbd_common.sh@66 -- # echo 6 00:12:46.981 16:22:06 -- bdev/nbd_common.sh@95 -- # count=6 
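nbd_get_count, whose trace ends here, is how the suite cross-checks its own bookkeeping against the RPC server: fetch nbd_get_disks, pull every nbd_device field out with jq, and count the /dev/nbd names; the '[' 6 -ne 6 ']' test that follows fails the run on any mismatch. A condensed sketch (rpc.py path abbreviated; the || true matches the bare 'true' seen in the trace when the list is empty):

    nbd_get_count() {
        local rpc_server=$1 json names
        json=$(rpc.py -s "$rpc_server" nbd_get_disks)
        names=$(echo "$json" | jq -r '.[] | .nbd_device')
        # grep -c exits non-zero when nothing matches, so keep the pipeline alive
        echo "$names" | grep -c /dev/nbd || true
    }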
00:12:46.981 16:22:06 -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:12:46.981 16:22:06 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:12:46.981 16:22:06 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:46.981 16:22:06 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:46.981 16:22:06 -- bdev/nbd_common.sh@71 -- # local operation=write 00:12:46.981 16:22:06 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:46.981 16:22:06 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:12:46.981 16:22:06 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:12:46.981 256+0 records in 00:12:46.981 256+0 records out 00:12:46.981 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00610558 s, 172 MB/s 00:12:46.981 16:22:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:46.981 16:22:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:12:47.243 256+0 records in 00:12:47.243 256+0 records out 00:12:47.243 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.247131 s, 4.2 MB/s 00:12:47.243 16:22:06 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:47.243 16:22:06 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:12:47.505 256+0 records in 00:12:47.505 256+0 records out 00:12:47.505 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.251372 s, 4.2 MB/s 00:12:47.505 16:22:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:47.505 16:22:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:12:47.767 256+0 records in 00:12:47.767 256+0 records out 00:12:47.767 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.246531 s, 4.3 MB/s 00:12:47.767 16:22:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:47.767 16:22:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:12:48.028 256+0 records in 00:12:48.028 256+0 records out 00:12:48.028 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245097 s, 4.3 MB/s 00:12:48.028 16:22:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:48.028 16:22:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:12:48.290 256+0 records in 00:12:48.290 256+0 records out 00:12:48.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.279047 s, 3.8 MB/s 00:12:48.290 16:22:07 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:12:48.290 16:22:07 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:12:48.290 256+0 records in 00:12:48.290 256+0 records out 00:12:48.290 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.176951 s, 5.9 MB/s 00:12:48.290 16:22:08 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:12:48.290 16:22:08 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:48.290 16:22:08 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:12:48.290 16:22:08 -- 
bdev/nbd_common.sh@71 -- # local operation=verify 00:12:48.290 16:22:08 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:48.290 16:22:08 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:12:48.290 16:22:08 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:12:48.290 16:22:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:48.290 16:22:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@51 -- # local i 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:48.551 16:22:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:48.552 16:22:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:48.552 16:22:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:12:48.552 16:22:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.552 16:22:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.552 16:22:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:48.552 16:22:08 -- bdev/nbd_common.sh@41 -- # break 00:12:48.552 16:22:08 -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.552 16:22:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.552 16:22:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:12:48.813 16:22:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:12:48.813 16:22:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:12:48.813 16:22:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:12:48.813 16:22:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:48.813 16:22:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:48.813 16:22:08 -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:12:48.813 16:22:08 -- bdev/nbd_common.sh@41 -- # break 00:12:48.813 16:22:08 -- bdev/nbd_common.sh@45 -- # return 0 00:12:48.813 16:22:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:48.813 16:22:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:12:49.074 16:22:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:12:49.074 16:22:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:12:49.074 16:22:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:12:49.074 16:22:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.074 16:22:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.074 16:22:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:12:49.074 16:22:08 -- bdev/nbd_common.sh@41 -- # break 00:12:49.074 16:22:08 -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.074 16:22:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.074 16:22:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:12:49.335 16:22:08 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:12:49.335 16:22:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:12:49.335 16:22:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:12:49.335 16:22:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.335 16:22:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.335 16:22:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:12:49.335 16:22:08 -- bdev/nbd_common.sh@41 -- # break 00:12:49.335 16:22:08 -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.335 16:22:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.335 16:22:08 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@41 -- # break 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@41 -- # break 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@45 -- # return 0 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:49.595 16:22:09 -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:12:49.856 16:22:09 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:12:49.856 16:22:09 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:12:49.856 16:22:09 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:12:49.856 16:22:09 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:12:49.856 16:22:09 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:12:49.856 16:22:09 -- bdev/nbd_common.sh@65 -- # echo '' 00:12:49.856 16:22:09 -- bdev/nbd_common.sh@65 -- # true 00:12:49.856 16:22:09 -- bdev/nbd_common.sh@65 -- # count=0 00:12:49.856 16:22:09 -- bdev/nbd_common.sh@66 -- # echo 0 00:12:49.856 16:22:09 -- bdev/nbd_common.sh@104 -- # count=0 00:12:49.856 16:22:09 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:12:49.856 16:22:09 -- bdev/nbd_common.sh@109 -- # return 0 00:12:49.856 16:22:09 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:12:49.856 16:22:09 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:49.856 16:22:09 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:12:49.856 16:22:09 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:12:49.856 16:22:09 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:12:49.856 16:22:09 -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:12:50.116 malloc_lvol_verify 00:12:50.116 16:22:09 -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:12:50.116 243046d5-8c2f-41ea-baf7-6626b0a5ad38 00:12:50.376 16:22:09 -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:12:50.376 64bc7d59-08eb-4ba2-b5a9-7b741b504a91 00:12:50.376 16:22:10 -- bdev/nbd_common.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:12:50.636 /dev/nbd0 00:12:50.636 16:22:10 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:12:50.636 mke2fs 1.47.0 (5-Feb-2023) 00:12:50.636 Discarding device blocks: 0/4096 done 00:12:50.636 Creating filesystem with 4096 1k blocks and 1024 inodes 00:12:50.636 00:12:50.636 Allocating group tables: 0/1 done 00:12:50.636 Writing inode tables: 0/1 done 00:12:50.636 Creating journal (1024 blocks): done 00:12:50.636 Writing superblocks and filesystem accounting information: 0/1 done 00:12:50.636 00:12:50.636 16:22:10 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:12:50.636 16:22:10 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:12:50.636 16:22:10 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:12:50.636 16:22:10 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:12:50.636 16:22:10 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:12:50.636 16:22:10 -- bdev/nbd_common.sh@51 -- # local i 00:12:50.636 16:22:10 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:12:50.636 16:22:10 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:12:50.898 16:22:10 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:12:50.898 16:22:10 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:12:50.898 16:22:10 -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd0 00:12:50.898 16:22:10 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:12:50.898 16:22:10 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:12:50.898 16:22:10 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:12:50.898 16:22:10 -- bdev/nbd_common.sh@41 -- # break 00:12:50.898 16:22:10 -- bdev/nbd_common.sh@45 -- # return 0 00:12:50.898 16:22:10 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:12:50.898 16:22:10 -- bdev/nbd_common.sh@147 -- # return 0 00:12:50.898 16:22:10 -- bdev/blockdev.sh@324 -- # killprocess 67997 00:12:50.898 16:22:10 -- common/autotest_common.sh@936 -- # '[' -z 67997 ']' 00:12:50.898 16:22:10 -- common/autotest_common.sh@940 -- # kill -0 67997 00:12:50.898 16:22:10 -- common/autotest_common.sh@941 -- # uname 00:12:50.898 16:22:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:50.898 16:22:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67997 00:12:50.898 16:22:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:50.898 16:22:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:50.898 16:22:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67997' 00:12:50.898 killing process with pid 67997 00:12:50.898 16:22:10 -- common/autotest_common.sh@955 -- # kill 67997 00:12:50.898 16:22:10 -- common/autotest_common.sh@960 -- # wait 67997 00:12:51.471 16:22:11 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:12:51.471 00:12:51.471 real 0m10.264s 00:12:51.471 user 0m13.598s 00:12:51.471 sys 0m3.583s 00:12:51.471 16:22:11 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:12:51.471 16:22:11 -- common/autotest_common.sh@10 -- # set +x 00:12:51.471 ************************************ 00:12:51.471 END TEST bdev_nbd 00:12:51.471 ************************************ 00:12:51.731 16:22:11 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:12:51.731 16:22:11 -- bdev/blockdev.sh@762 -- # '[' xnvme = nvme ']' 00:12:51.731 16:22:11 -- bdev/blockdev.sh@762 -- # '[' xnvme = gpt ']' 00:12:51.731 16:22:11 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:12:51.731 16:22:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:51.731 16:22:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:51.731 16:22:11 -- common/autotest_common.sh@10 -- # set +x 00:12:51.731 ************************************ 00:12:51.731 START TEST bdev_fio 00:12:51.731 ************************************ 00:12:51.731 16:22:11 -- common/autotest_common.sh@1114 -- # fio_test_suite '' 00:12:51.731 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:12:51.731 16:22:11 -- bdev/blockdev.sh@329 -- # local env_context 00:12:51.731 16:22:11 -- bdev/blockdev.sh@333 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:12:51.731 16:22:11 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:12:51.731 16:22:11 -- bdev/blockdev.sh@337 -- # echo '' 00:12:51.731 16:22:11 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:12:51.731 16:22:11 -- bdev/blockdev.sh@337 -- # env_context= 00:12:51.731 16:22:11 -- bdev/blockdev.sh@338 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:12:51.731 16:22:11 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:51.731 16:22:11 -- common/autotest_common.sh@1270 -- # local workload=verify 00:12:51.731 16:22:11 -- common/autotest_common.sh@1271 -- # local bdev_type=AIO 00:12:51.731 
16:22:11 -- common/autotest_common.sh@1272 -- # local env_context= 00:12:51.731 16:22:11 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:12:51.731 16:22:11 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:12:51.731 16:22:11 -- common/autotest_common.sh@1280 -- # '[' -z verify ']' 00:12:51.731 16:22:11 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:12:51.731 16:22:11 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:12:51.731 16:22:11 -- common/autotest_common.sh@1290 -- # cat 00:12:51.731 16:22:11 -- common/autotest_common.sh@1302 -- # '[' verify == verify ']' 00:12:51.731 16:22:11 -- common/autotest_common.sh@1303 -- # cat 00:12:51.731 16:22:11 -- common/autotest_common.sh@1312 -- # '[' AIO == AIO ']' 00:12:51.731 16:22:11 -- common/autotest_common.sh@1313 -- # /usr/src/fio/fio --version 00:12:51.731 16:22:11 -- common/autotest_common.sh@1313 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:12:51.731 16:22:11 -- common/autotest_common.sh@1314 -- # echo serialize_overlap=1 00:12:51.731 16:22:11 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:51.731 16:22:11 -- bdev/blockdev.sh@340 -- # echo '[job_nvme0n1]' 00:12:51.731 16:22:11 -- bdev/blockdev.sh@341 -- # echo filename=nvme0n1 00:12:51.731 16:22:11 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:51.731 16:22:11 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n1]' 00:12:51.731 16:22:11 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n1 00:12:51.731 16:22:11 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:51.731 16:22:11 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n2]' 00:12:51.731 16:22:11 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n2 00:12:51.731 16:22:11 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:51.732 16:22:11 -- bdev/blockdev.sh@340 -- # echo '[job_nvme1n3]' 00:12:51.732 16:22:11 -- bdev/blockdev.sh@341 -- # echo filename=nvme1n3 00:12:51.732 16:22:11 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:51.732 16:22:11 -- bdev/blockdev.sh@340 -- # echo '[job_nvme2n1]' 00:12:51.732 16:22:11 -- bdev/blockdev.sh@341 -- # echo filename=nvme2n1 00:12:51.732 16:22:11 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:12:51.732 16:22:11 -- bdev/blockdev.sh@340 -- # echo '[job_nvme3n1]' 00:12:51.732 16:22:11 -- bdev/blockdev.sh@341 -- # echo filename=nvme3n1 00:12:51.732 16:22:11 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:12:51.732 16:22:11 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:51.732 16:22:11 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:12:51.732 16:22:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:51.732 16:22:11 -- common/autotest_common.sh@10 -- # set +x 00:12:51.732 ************************************ 00:12:51.732 START TEST bdev_fio_rw_verify 00:12:51.732 ************************************ 00:12:51.732 16:22:11 -- common/autotest_common.sh@1114 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:51.732 16:22:11 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:51.732 16:22:11 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:12:51.732 16:22:11 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:51.732 16:22:11 -- common/autotest_common.sh@1328 -- # local sanitizers 00:12:51.732 16:22:11 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:51.732 16:22:11 -- common/autotest_common.sh@1330 -- # shift 00:12:51.732 16:22:11 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:12:51.732 16:22:11 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:12:51.732 16:22:11 -- common/autotest_common.sh@1334 -- # grep libasan 00:12:51.732 16:22:11 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:51.732 16:22:11 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:12:51.732 16:22:11 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:51.732 16:22:11 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:51.732 16:22:11 -- common/autotest_common.sh@1336 -- # break 00:12:51.732 16:22:11 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:51.732 16:22:11 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:12:51.992 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:51.992 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:51.992 job_nvme1n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:51.992 job_nvme1n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:51.992 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:51.992 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:12:51.992 fio-3.35 00:12:51.992 Starting 6 threads 00:13:04.235 00:13:04.235 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=68400: Sat Nov 9 16:22:22 2024 00:13:04.235 read: IOPS=12.1k, BW=47.1MiB/s (49.4MB/s)(471MiB/10002msec) 00:13:04.235 slat (usec): min=2, max=1440, avg= 6.43, stdev= 9.96 00:13:04.235 clat (usec): min=111, max=10049, avg=1682.39, stdev=816.78 00:13:04.235 lat (usec): min=114, max=10063, avg=1688.82, stdev=817.33 00:13:04.235 clat percentiles (usec): 00:13:04.235 | 50.000th=[ 1582], 99.000th=[ 4228], 99.900th=[ 5473], 99.990th=[ 7308], 
00:13:04.235 | 99.999th=[10028] 00:13:04.235 write: IOPS=12.3k, BW=48.1MiB/s (50.4MB/s)(481MiB/10002msec); 0 zone resets 00:13:04.235 slat (usec): min=9, max=6616, avg=41.33, stdev=149.64 00:13:04.235 clat (usec): min=85, max=9080, avg=1923.91, stdev=897.19 00:13:04.235 lat (usec): min=106, max=9118, avg=1965.24, stdev=909.68 00:13:04.235 clat percentiles (usec): 00:13:04.235 | 50.000th=[ 1778], 99.000th=[ 4686], 99.900th=[ 6587], 99.990th=[ 8356], 00:13:04.235 | 99.999th=[ 9110] 00:13:04.235 bw ( KiB/s): min=40485, max=62971, per=100.00%, avg=49422.89, stdev=1027.19, samples=114 00:13:04.235 iops : min=10120, max=15742, avg=12354.58, stdev=256.80, samples=114 00:13:04.235 lat (usec) : 100=0.01%, 250=0.41%, 500=2.20%, 750=4.87%, 1000=8.31% 00:13:04.235 lat (msec) : 2=50.29%, 4=31.78%, 10=2.15%, 20=0.01% 00:13:04.235 cpu : usr=48.53%, sys=29.20%, ctx=5209, majf=0, minf=14777 00:13:04.235 IO depths : 1=11.6%, 2=24.0%, 4=51.0%, 8=13.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:04.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.235 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:04.236 issued rwts: total=120592,123068,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:04.236 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:04.236 00:13:04.236 Run status group 0 (all jobs): 00:13:04.236 READ: bw=47.1MiB/s (49.4MB/s), 47.1MiB/s-47.1MiB/s (49.4MB/s-49.4MB/s), io=471MiB (494MB), run=10002-10002msec 00:13:04.236 WRITE: bw=48.1MiB/s (50.4MB/s), 48.1MiB/s-48.1MiB/s (50.4MB/s-50.4MB/s), io=481MiB (504MB), run=10002-10002msec 00:13:04.236 ----------------------------------------------------- 00:13:04.236 Suppressions used: 00:13:04.236 count bytes template 00:13:04.236 6 48 /usr/src/fio/parse.c 00:13:04.236 2411 231456 /usr/src/fio/iolog.c 00:13:04.236 1 8 libtcmalloc_minimal.so 00:13:04.236 1 904 libcrypto.so 00:13:04.236 ----------------------------------------------------- 00:13:04.236 00:13:04.236 00:13:04.236 real 0m12.093s 00:13:04.236 user 0m30.774s 00:13:04.236 sys 0m17.967s 00:13:04.236 ************************************ 00:13:04.236 END TEST bdev_fio_rw_verify 00:13:04.236 16:22:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:04.236 16:22:23 -- common/autotest_common.sh@10 -- # set +x 00:13:04.236 ************************************ 00:13:04.236 16:22:23 -- bdev/blockdev.sh@348 -- # rm -f 00:13:04.236 16:22:23 -- bdev/blockdev.sh@349 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:04.236 16:22:23 -- bdev/blockdev.sh@352 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:04.236 16:22:23 -- common/autotest_common.sh@1269 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:04.236 16:22:23 -- common/autotest_common.sh@1270 -- # local workload=trim 00:13:04.236 16:22:23 -- common/autotest_common.sh@1271 -- # local bdev_type= 00:13:04.236 16:22:23 -- common/autotest_common.sh@1272 -- # local env_context= 00:13:04.236 16:22:23 -- common/autotest_common.sh@1273 -- # local fio_dir=/usr/src/fio 00:13:04.236 16:22:23 -- common/autotest_common.sh@1275 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:04.236 16:22:23 -- common/autotest_common.sh@1280 -- # '[' -z trim ']' 00:13:04.236 16:22:23 -- common/autotest_common.sh@1284 -- # '[' -n '' ']' 00:13:04.236 16:22:23 -- common/autotest_common.sh@1288 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:04.236 16:22:23 -- common/autotest_common.sh@1290 -- # cat 
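Two mechanics of the fio stage above are worth spelling out (the trace below continues with the trim-workload variant of the same setup). First, bdev.fio is generated, not checked in: fio_config_gen writes the global verify template (appending serialize_overlap=1 once an AIO-capable fio-3.x is confirmed) and blockdev.sh adds one stanza per bdev. Second, since the fio plugin is an ASan-instrumented build while /usr/src/fio/fio is not, the harness resolves the sanitizer runtime with ldd and preloads it ahead of the plugin. A sketch with paths as traced (--verify_state_save=0, --spdk_mem=0 and --aux-path are dropped here for brevity):

    # one job stanza per bdev, exactly as echoed by blockdev.sh
    for b in "${bdevs_name[@]}"; do
        echo "[job_${b}]"
        echo "filename=${b}"
    done >> /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio

    # preload the ASan runtime the plugin links against, so a
    # non-instrumented fio binary can load the instrumented ioengine
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio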
00:13:04.236 16:22:23 -- common/autotest_common.sh@1302 -- # '[' trim == verify ']' 00:13:04.236 16:22:23 -- common/autotest_common.sh@1317 -- # '[' trim == trim ']' 00:13:04.236 16:22:23 -- common/autotest_common.sh@1318 -- # echo rw=trimwrite 00:13:04.236 16:22:23 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:04.236 16:22:23 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "515e459e-5e9e-450f-ae23-9809e2ab6e3b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "515e459e-5e9e-450f-ae23-9809e2ab6e3b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "e5d7a331-d11d-4625-be7c-a8b6956295cb"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e5d7a331-d11d-4625-be7c-a8b6956295cb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n2",' ' "aliases": [' ' "02adc8be-619c-488b-aa43-d5810a8d3a3b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "02adc8be-619c-488b-aa43-d5810a8d3a3b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n3",' ' "aliases": [' ' "a99aa3c8-a15c-4151-bfbc-632619ca8e70"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a99aa3c8-a15c-4151-bfbc-632619ca8e70",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "65d3d30e-acb8-4241-9924-eaaebc7c1f35"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "65d3d30e-acb8-4241-9924-eaaebc7c1f35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' 
' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "fefba926-323f-48be-91b7-3df3a4119f70"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "fefba926-323f-48be-91b7-3df3a4119f70",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {}' '}' 00:13:04.236 16:22:23 -- bdev/blockdev.sh@353 -- # [[ -n '' ]] 00:13:04.236 16:22:23 -- bdev/blockdev.sh@359 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:04.236 /home/vagrant/spdk_repo/spdk 00:13:04.236 16:22:23 -- bdev/blockdev.sh@360 -- # popd 00:13:04.236 16:22:23 -- bdev/blockdev.sh@361 -- # trap - SIGINT SIGTERM EXIT 00:13:04.236 16:22:23 -- bdev/blockdev.sh@362 -- # return 0 00:13:04.236 00:13:04.236 real 0m12.260s 00:13:04.236 user 0m30.849s 00:13:04.236 sys 0m18.039s 00:13:04.236 16:22:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:04.236 16:22:23 -- common/autotest_common.sh@10 -- # set +x 00:13:04.236 ************************************ 00:13:04.236 END TEST bdev_fio 00:13:04.236 ************************************ 00:13:04.236 16:22:23 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:04.236 16:22:23 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:04.236 16:22:23 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:13:04.236 16:22:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:04.236 16:22:23 -- common/autotest_common.sh@10 -- # set +x 00:13:04.236 ************************************ 00:13:04.236 START TEST bdev_verify 00:13:04.236 ************************************ 00:13:04.236 16:22:23 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:04.236 [2024-11-09 16:22:23.671665] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:04.236 [2024-11-09 16:22:23.671810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68575 ] 00:13:04.236 [2024-11-09 16:22:23.826729] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:04.498 [2024-11-09 16:22:24.049994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:04.498 [2024-11-09 16:22:24.050077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.758 Running I/O for 5 seconds... 
00:13:10.057 
00:13:10.057 Latency(us) 
00:13:10.057 [2024-11-09T16:22:29.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:13:10.057 [2024-11-09T16:22:29.827Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:13:10.057 Verification LBA range: start 0x0 length 0x20000 
00:13:10.057 nvme0n1 : 5.10 2043.87 7.98 0.00 0.00 62200.96 15526.99 81869.59 
00:13:10.057 [2024-11-09T16:22:29.827Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 
00:13:10.057 Verification LBA range: start 0x20000 length 0x20000 
00:13:10.057 nvme0n1 : 5.10 2180.60 8.52 0.00 0.00 58471.71 14619.57 86305.87 
00:13:10.057 [2024-11-09T16:22:29.827Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:13:10.057 Verification LBA range: start 0x0 length 0x80000 
00:13:10.057 nvme1n1 : 5.11 1969.10 7.69 0.00 0.00 64583.30 5973.86 89128.96 
00:13:10.057 [2024-11-09T16:22:29.827Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 
00:13:10.057 Verification LBA range: start 0x80000 length 0x80000 
00:13:10.057 nvme1n1 : 5.11 1988.17 7.77 0.00 0.00 64096.58 13611.32 78239.90 
00:13:10.057 [2024-11-09T16:22:29.827Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:13:10.057 Verification LBA range: start 0x0 length 0x80000 
00:13:10.057 nvme1n2 : 5.09 1919.01 7.50 0.00 0.00 66230.76 15426.17 96388.33 
00:13:10.057 [2024-11-09T16:22:29.827Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 
00:13:10.057 Verification LBA range: start 0x80000 length 0x80000 
00:13:10.057 nvme1n2 : 5.10 1945.42 7.60 0.00 0.00 65213.02 18047.61 91952.05 
00:13:10.057 [2024-11-09T16:22:29.827Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:13:10.057 Verification LBA range: start 0x0 length 0x80000 
00:13:10.057 nvme1n3 : 5.10 1958.24 7.65 0.00 0.00 64877.96 4058.19 83886.08 
00:13:10.057 [2024-11-09T16:22:29.827Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 
00:13:10.057 Verification LBA range: start 0x80000 length 0x80000 
00:13:10.057 nvme1n3 : 5.09 2081.10 8.13 0.00 0.00 60953.48 12754.31 72190.42 
00:13:10.057 [2024-11-09T16:22:29.827Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:13:10.057 Verification LBA range: start 0x0 length 0xbd0bd 
00:13:10.057 nvme2n1 : 5.10 1881.59 7.35 0.00 0.00 67424.82 9578.34 85095.98 
00:13:10.057 [2024-11-09T16:22:29.827Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 
00:13:10.057 Verification LBA range: start 0xbd0bd length 0xbd0bd 
00:13:10.057 nvme2n1 : 5.11 1982.98 7.75 0.00 0.00 63902.92 12855.14 85902.57 
00:13:10.057 [2024-11-09T16:22:29.827Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 
00:13:10.057 Verification LBA range: start 0x0 length 0xa0000 
00:13:10.057 nvme3n1 : 5.09 2065.89 8.07 0.00 0.00 61339.08 12905.55 85902.57 
00:13:10.057 [2024-11-09T16:22:29.827Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 
00:13:10.057 Verification LBA range: start 0xa0000 length 0xa0000 
00:13:10.057 nvme3n1 : 5.11 2125.22 8.30 0.00 0.00 59545.03 6351.95 86709.17 
00:13:10.057 [2024-11-09T16:22:29.827Z] =================================================================================================================== 
00:13:10.057 [2024-11-09T16:22:29.827Z] Total : 24141.17 94.30 0.00 0.00 63126.80 4058.19 96388.33 
00:13:11.000 
00:13:11.000 real 0m7.001s 
00:13:11.000 user 0m8.975s 
00:13:11.000 
sys 0m3.078s 00:13:11.000 16:22:30 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:11.000 ************************************ 00:13:11.000 END TEST bdev_verify 00:13:11.000 ************************************ 00:13:11.000 16:22:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.000 16:22:30 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:11.000 16:22:30 -- common/autotest_common.sh@1087 -- # '[' 16 -le 1 ']' 00:13:11.000 16:22:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:11.000 16:22:30 -- common/autotest_common.sh@10 -- # set +x 00:13:11.000 ************************************ 00:13:11.000 START TEST bdev_verify_big_io 00:13:11.000 ************************************ 00:13:11.000 16:22:30 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:11.000 [2024-11-09 16:22:30.743154] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:11.000 [2024-11-09 16:22:30.743317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68677 ] 00:13:11.261 [2024-11-09 16:22:30.896193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:11.522 [2024-11-09 16:22:31.164376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.522 [2024-11-09 16:22:31.164392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.095 Running I/O for 5 seconds... 
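bdev_verify_big_io is the same harness with -o 65536, so every verify I/O covers 64 KiB instead of 4 KiB: IOPS drop by roughly an order of magnitude while per-device bandwidth stays comparable. The IOPS and MiB/s columns below are mutually consistent; for nvme2n1 on core 0, for example:

    awk 'BEGIN { print 371.79 * 65536 / 1048576 }'   # ~23.24 MiB/s, as tabulated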
00:13:18.685
00:13:18.685 Latency(us)
00:13:18.685 [2024-11-09T16:22:38.455Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:18.685 [2024-11-09T16:22:38.455Z] Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:18.685 Verification LBA range: start 0x0 length 0x2000
00:13:18.685 nvme0n1 : 5.36 275.80 17.24 0.00 0.00 449329.47 39724.90 822728.86
00:13:18.685 [2024-11-09T16:22:38.455Z] Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:18.685 Verification LBA range: start 0x2000 length 0x2000
00:13:18.685 nvme0n1 : 5.69 198.49 12.41 0.00 0.00 618751.82 92758.65 961463.53
00:13:18.685 [2024-11-09T16:22:38.455Z] Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:18.685 Verification LBA range: start 0x0 length 0x8000
00:13:18.685 nvme1n1 : 5.48 269.87 16.87 0.00 0.00 448050.13 163739.18 613013.66
00:13:18.685 [2024-11-09T16:22:38.455Z] Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:18.685 Verification LBA range: start 0x8000 length 0x8000
00:13:18.685 nvme1n1 : 5.69 198.40 12.40 0.00 0.00 594470.86 90742.15 822728.86
00:13:18.685 [2024-11-09T16:22:38.455Z] Job: nvme1n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:18.685 Verification LBA range: start 0x0 length 0x8000
00:13:18.685 nvme1n2 : 5.57 298.05 18.63 0.00 0.00 396944.96 22887.19 496863.70
00:13:18.685 [2024-11-09T16:22:38.455Z] Job: nvme1n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:18.685 Verification LBA range: start 0x8000 length 0x8000
00:13:18.685 nvme1n2 : 5.73 199.28 12.45 0.00 0.00 579479.85 80659.69 671088.64
00:13:18.685 [2024-11-09T16:22:38.455Z] Job: nvme1n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:18.685 Verification LBA range: start 0x0 length 0x8000
00:13:18.685 nvme1n3 : 5.62 295.08 18.44 0.00 0.00 394370.83 79046.50 471052.60
00:13:18.685 [2024-11-09T16:22:38.455Z] Job: nvme1n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:18.685 Verification LBA range: start 0x8000 length 0x8000
00:13:18.685 nvme1n3 : 5.80 239.63 14.98 0.00 0.00 463644.78 64931.05 654956.70
00:13:18.685 [2024-11-09T16:22:38.455Z] Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:18.685 Verification LBA range: start 0x0 length 0xbd0b
00:13:18.685 nvme2n1 : 5.65 371.79 23.24 0.00 0.00 309888.61 46580.97 330704.74
00:13:18.685 [2024-11-09T16:22:38.455Z] Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:18.685 Verification LBA range: start 0xbd0b length 0xbd0b
00:13:18.685 nvme2n1 : 5.88 303.25 18.95 0.00 0.00 358718.92 6503.19 532353.97
00:13:18.685 [2024-11-09T16:22:38.455Z] Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:18.685 Verification LBA range: start 0x0 length 0xa000
00:13:18.685 nvme3n1 : 5.66 352.30 22.02 0.00 0.00 320777.82 2898.71 471052.60
00:13:18.685 [2024-11-09T16:22:38.455Z] Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:18.685 Verification LBA range: start 0xa000 length 0xa000
00:13:18.685 nvme3n1 : 5.94 307.39 19.21 0.00 0.00 348117.55 1543.88 619466.44
00:13:18.685 [2024-11-09T16:22:38.455Z] ===================================================================================================================
00:13:18.685 [2024-11-09T16:22:38.455Z] Total : 3309.32 206.83 0.00 0.00 419921.10 1543.88 961463.53
00:13:19.259
00:13:19.259 real 0m8.202s
00:13:19.259 user 0m14.346s sys 0m0.780s
00:13:19.259 16:22:38 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:19.259 ************************************
00:13:19.259 END TEST bdev_verify_big_io
00:13:19.259 ************************************
00:13:19.259 16:22:38 -- common/autotest_common.sh@10 -- # set +x
00:13:19.259 16:22:38 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:19.259 16:22:38 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']'
00:13:19.259 16:22:38 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:13:19.259 16:22:38 -- common/autotest_common.sh@10 -- # set +x
00:13:19.259 ************************************
00:13:19.259 START TEST bdev_write_zeroes
00:13:19.259 ************************************
00:13:19.259 16:22:38 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:19.259 [2024-11-09 16:22:39.018529] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:13:19.259 [2024-11-09 16:22:39.019173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68792 ]
00:13:19.520 [2024-11-09 16:22:39.172987] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:19.781 [2024-11-09 16:22:39.431505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:13:20.354 Running I/O for 1 seconds...
00:13:21.335
00:13:21.335 Latency(us)
00:13:21.335 [2024-11-09T16:22:41.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:21.335 [2024-11-09T16:22:41.105Z] Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:21.335 nvme0n1 : 1.02 11525.54 45.02 0.00 0.00 11094.58 8973.39 27827.59
00:13:21.335 [2024-11-09T16:22:41.105Z] Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:21.335 nvme1n1 : 1.01 11572.82 45.21 0.00 0.00 11038.32 9023.80 26416.05
00:13:21.335 [2024-11-09T16:22:41.105Z] Job: nvme1n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:21.335 nvme1n2 : 1.01 11558.93 45.15 0.00 0.00 11036.41 9023.80 24802.86
00:13:21.335 [2024-11-09T16:22:41.105Z] Job: nvme1n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:21.335 nvme1n3 : 1.01 11545.74 45.10 0.00 0.00 11038.76 9023.80 23996.26
00:13:21.335 [2024-11-09T16:22:41.105Z] Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:21.335 nvme2n1 : 1.03 12259.35 47.89 0.00 0.00 10360.70 4990.82 18350.08
00:13:21.335 [2024-11-09T16:22:41.105Z] Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:21.335 nvme3n1 : 1.03 11458.32 44.76 0.00 0.00 11098.95 8973.39 27222.65
00:13:21.335 [2024-11-09T16:22:41.105Z] ===================================================================================================================
00:13:21.335 [2024-11-09T16:22:41.105Z] Total : 69920.70 273.13 0.00 0.00 10937.54 4990.82 27827.59
00:13:22.278
00:13:22.278 real 0m2.873s
00:13:22.278 user 0m2.169s
00:13:22.278 sys 0m0.527s
16:22:41 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:13:22.278 ************************************ 00:13:22.278 END TEST bdev_write_zeroes 00:13:22.278 ************************************ 00:13:22.278 16:22:41 -- common/autotest_common.sh@10 -- # set +x 00:13:22.278 16:22:41 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:22.278 16:22:41 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:22.278 16:22:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:22.278 16:22:41 -- common/autotest_common.sh@10 -- # set +x 00:13:22.278 ************************************ 00:13:22.278 START TEST bdev_json_nonenclosed 00:13:22.278 ************************************ 00:13:22.278 16:22:41 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:22.278 [2024-11-09 16:22:41.960540] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:22.278 [2024-11-09 16:22:41.960672] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68846 ] 00:13:22.539 [2024-11-09 16:22:42.114111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.800 [2024-11-09 16:22:42.335308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.800 [2024-11-09 16:22:42.335502] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:13:22.800 [2024-11-09 16:22:42.335523] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:23.061 00:13:23.061 real 0m0.751s 00:13:23.061 user 0m0.514s 00:13:23.061 sys 0m0.130s 00:13:23.061 16:22:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:23.061 ************************************ 00:13:23.061 END TEST bdev_json_nonenclosed 00:13:23.061 ************************************ 00:13:23.061 16:22:42 -- common/autotest_common.sh@10 -- # set +x 00:13:23.062 16:22:42 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:23.062 16:22:42 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:13:23.062 16:22:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:23.062 16:22:42 -- common/autotest_common.sh@10 -- # set +x 00:13:23.062 ************************************ 00:13:23.062 START TEST bdev_json_nonarray 00:13:23.062 ************************************ 00:13:23.062 16:22:42 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:13:23.062 [2024-11-09 16:22:42.786512] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
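Neither nonenclosed.json nor nonarray.json is echoed into the log; only the json_config.c error strings above and below pin down their shape. A plausible reconstruction of the two fixtures, inferred from those messages rather than from the files themselves:

# nonenclosed.json -- rejected with "not enclosed in {}": the top level is
# not a JSON object, e.g. a bare member such as
#     "subsystems": []
# nonarray.json -- rejected with "'subsystems' should be an array", e.g.
#     { "subsystems": { "subsystem": "bdev", "config": [] } }

In both runs the WARNING that spdk_app_stop'd on non-zero is the expected outcome: these are negative tests, so the deliberately invalid config failing to load is what lets the wrapper report END TEST as a pass.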
00:13:23.062 [2024-11-09 16:22:42.786643] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68877 ] 00:13:23.323 [2024-11-09 16:22:42.938550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:23.584 [2024-11-09 16:22:43.168008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.584 [2024-11-09 16:22:43.168202] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:13:23.584 [2024-11-09 16:22:43.168242] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:23.845 00:13:23.845 real 0m0.765s 00:13:23.845 user 0m0.530s 00:13:23.845 sys 0m0.128s 00:13:23.845 16:22:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:23.845 ************************************ 00:13:23.845 END TEST bdev_json_nonarray 00:13:23.845 ************************************ 00:13:23.845 16:22:43 -- common/autotest_common.sh@10 -- # set +x 00:13:23.845 16:22:43 -- bdev/blockdev.sh@785 -- # [[ xnvme == bdev ]] 00:13:23.845 16:22:43 -- bdev/blockdev.sh@792 -- # [[ xnvme == gpt ]] 00:13:23.845 16:22:43 -- bdev/blockdev.sh@796 -- # [[ xnvme == crypto_sw ]] 00:13:23.845 16:22:43 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:13:23.845 16:22:43 -- bdev/blockdev.sh@809 -- # cleanup 00:13:23.845 16:22:43 -- bdev/blockdev.sh@21 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:13:23.845 16:22:43 -- bdev/blockdev.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:23.845 16:22:43 -- bdev/blockdev.sh@24 -- # [[ xnvme == rbd ]] 00:13:23.845 16:22:43 -- bdev/blockdev.sh@28 -- # [[ xnvme == daos ]] 00:13:23.845 16:22:43 -- bdev/blockdev.sh@32 -- # [[ xnvme = \g\p\t ]] 00:13:23.845 16:22:43 -- bdev/blockdev.sh@38 -- # [[ xnvme == xnvme ]] 00:13:23.845 16:22:43 -- bdev/blockdev.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:24.789 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:27.338 0000:00:09.0 (1b36 0010): nvme -> uio_pci_generic 00:13:27.338 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:13:31.529 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:13:31.529 0000:00:08.0 (1b36 0010): nvme -> uio_pci_generic 00:13:31.790 00:13:31.790 real 1m2.440s 00:13:31.790 user 1m26.381s 00:13:31.790 sys 0m52.591s 00:13:31.790 16:22:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:31.790 ************************************ 00:13:31.790 END TEST blockdev_xnvme 00:13:31.790 ************************************ 00:13:31.790 16:22:51 -- common/autotest_common.sh@10 -- # set +x 00:13:31.790 16:22:51 -- spdk/autotest.sh@246 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:31.790 16:22:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:31.790 16:22:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:31.790 16:22:51 -- common/autotest_common.sh@10 -- # set +x 00:13:31.790 ************************************ 00:13:31.790 START TEST ublk 00:13:31.790 ************************************ 00:13:31.790 16:22:51 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:13:31.790 * Looking for test storage... 
00:13:31.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:31.790 16:22:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:31.790 16:22:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:31.790 16:22:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:31.790 16:22:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:31.790 16:22:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:31.790 16:22:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:31.790 16:22:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:31.790 16:22:51 -- scripts/common.sh@335 -- # IFS=.-: 00:13:31.790 16:22:51 -- scripts/common.sh@335 -- # read -ra ver1 00:13:31.790 16:22:51 -- scripts/common.sh@336 -- # IFS=.-: 00:13:31.791 16:22:51 -- scripts/common.sh@336 -- # read -ra ver2 00:13:31.791 16:22:51 -- scripts/common.sh@337 -- # local 'op=<' 00:13:31.791 16:22:51 -- scripts/common.sh@339 -- # ver1_l=2 00:13:31.791 16:22:51 -- scripts/common.sh@340 -- # ver2_l=1 00:13:31.791 16:22:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:31.791 16:22:51 -- scripts/common.sh@343 -- # case "$op" in 00:13:31.791 16:22:51 -- scripts/common.sh@344 -- # : 1 00:13:31.791 16:22:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:31.791 16:22:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:31.791 16:22:51 -- scripts/common.sh@364 -- # decimal 1 00:13:31.791 16:22:51 -- scripts/common.sh@352 -- # local d=1 00:13:31.791 16:22:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:31.791 16:22:51 -- scripts/common.sh@354 -- # echo 1 00:13:31.791 16:22:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:31.791 16:22:51 -- scripts/common.sh@365 -- # decimal 2 00:13:31.791 16:22:51 -- scripts/common.sh@352 -- # local d=2 00:13:31.791 16:22:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:31.791 16:22:51 -- scripts/common.sh@354 -- # echo 2 00:13:31.791 16:22:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:31.791 16:22:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:31.791 16:22:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:31.791 16:22:51 -- scripts/common.sh@367 -- # return 0 00:13:31.791 16:22:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:31.791 16:22:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:31.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.791 --rc genhtml_branch_coverage=1 00:13:31.791 --rc genhtml_function_coverage=1 00:13:31.791 --rc genhtml_legend=1 00:13:31.791 --rc geninfo_all_blocks=1 00:13:31.791 --rc geninfo_unexecuted_blocks=1 00:13:31.791 00:13:31.791 ' 00:13:31.791 16:22:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:31.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.791 --rc genhtml_branch_coverage=1 00:13:31.791 --rc genhtml_function_coverage=1 00:13:31.791 --rc genhtml_legend=1 00:13:31.791 --rc geninfo_all_blocks=1 00:13:31.791 --rc geninfo_unexecuted_blocks=1 00:13:31.791 00:13:31.791 ' 00:13:31.791 16:22:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:31.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.791 --rc genhtml_branch_coverage=1 00:13:31.791 --rc genhtml_function_coverage=1 00:13:31.791 --rc genhtml_legend=1 00:13:31.791 --rc geninfo_all_blocks=1 00:13:31.791 --rc geninfo_unexecuted_blocks=1 00:13:31.791 00:13:31.791 ' 00:13:31.791 16:22:51 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:31.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:31.791 --rc genhtml_branch_coverage=1 00:13:31.791 --rc genhtml_function_coverage=1 00:13:31.791 --rc genhtml_legend=1 00:13:31.791 --rc geninfo_all_blocks=1 00:13:31.791 --rc geninfo_unexecuted_blocks=1 00:13:31.791 00:13:31.791 ' 00:13:31.791 16:22:51 -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:31.791 16:22:51 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:31.791 16:22:51 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:31.791 16:22:51 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:31.791 16:22:51 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:31.791 16:22:51 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:31.791 16:22:51 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:31.791 16:22:51 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:31.791 16:22:51 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:31.791 16:22:51 -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:13:31.791 16:22:51 -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:13:31.791 16:22:51 -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:13:31.791 16:22:51 -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:13:31.791 16:22:51 -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:13:31.791 16:22:51 -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:13:31.791 16:22:51 -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:13:31.791 16:22:51 -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:13:31.791 16:22:51 -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:13:31.791 16:22:51 -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:13:31.791 16:22:51 -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:13:31.791 16:22:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:31.791 16:22:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:31.791 16:22:51 -- common/autotest_common.sh@10 -- # set +x 00:13:32.052 ************************************ 00:13:32.052 START TEST test_save_ublk_config 00:13:32.052 ************************************ 00:13:32.052 16:22:51 -- common/autotest_common.sh@1114 -- # test_save_config 00:13:32.052 16:22:51 -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:13:32.052 16:22:51 -- ublk/ublk.sh@103 -- # tgtpid=69192 00:13:32.052 16:22:51 -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:13:32.052 16:22:51 -- ublk/ublk.sh@106 -- # waitforlisten 69192 00:13:32.052 16:22:51 -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:13:32.052 16:22:51 -- common/autotest_common.sh@829 -- # '[' -z 69192 ']' 00:13:32.052 16:22:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.052 16:22:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.052 16:22:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.052 16:22:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.053 16:22:51 -- common/autotest_common.sh@10 -- # set +x 00:13:32.053 [2024-11-09 16:22:51.643614] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
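test_save_ublk_config stands up a ublk disk over a malloc bdev and then snapshots the running target with save_config; the resulting JSON is dumped in full below. Driven by hand with SPDK's stock rpc.py client, the same state would be built up roughly as follows (method names and parameter values are read out of the saved config below; the rpc.py flag spellings and the malloc sizing are assumptions, not taken from this log):

# Assumes the default RPC socket /var/tmp/spdk.sock.
scripts/rpc.py ublk_create_target --cpumask 1
# 8192 blocks x 4096 B in the saved config = a 32 MiB malloc bdev; the
# -b/size/block-size argument order matches the rpc_cmd calls later in this log.
scripts/rpc.py bdev_malloc_create -b malloc0 32 4096
scripts/rpc.py ublk_start_disk malloc0 0 --num-queues 1 --queue-depth 128
scripts/rpc.py save_config > ublk_config.json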
00:13:32.053 [2024-11-09 16:22:51.643718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69192 ] 00:13:32.053 [2024-11-09 16:22:51.789429] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.314 [2024-11-09 16:22:52.024581] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:32.314 [2024-11-09 16:22:52.024818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.700 16:22:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:33.700 16:22:53 -- common/autotest_common.sh@862 -- # return 0 00:13:33.700 16:22:53 -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:13:33.700 16:22:53 -- ublk/ublk.sh@108 -- # rpc_cmd 00:13:33.700 16:22:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.700 16:22:53 -- common/autotest_common.sh@10 -- # set +x 00:13:33.700 [2024-11-09 16:22:53.169090] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:33.700 malloc0 00:13:33.700 [2024-11-09 16:22:53.240411] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:33.700 [2024-11-09 16:22:53.240515] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:33.700 [2024-11-09 16:22:53.240524] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:33.700 [2024-11-09 16:22:53.240535] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:33.700 [2024-11-09 16:22:53.248432] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:33.700 [2024-11-09 16:22:53.248464] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:33.700 [2024-11-09 16:22:53.256260] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:33.700 [2024-11-09 16:22:53.256405] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:33.700 [2024-11-09 16:22:53.273252] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:33.700 0 00:13:33.700 16:22:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.700 16:22:53 -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:13:33.700 16:22:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.700 16:22:53 -- common/autotest_common.sh@10 -- # set +x 00:13:33.961 16:22:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.961 16:22:53 -- ublk/ublk.sh@115 -- # config='{ 00:13:33.961 "subsystems": [ 00:13:33.961 { 00:13:33.961 "subsystem": "iobuf", 00:13:33.961 "config": [ 00:13:33.961 { 00:13:33.961 "method": "iobuf_set_options", 00:13:33.961 "params": { 00:13:33.961 "small_pool_count": 8192, 00:13:33.961 "large_pool_count": 1024, 00:13:33.961 "small_bufsize": 8192, 00:13:33.961 "large_bufsize": 135168 00:13:33.961 } 00:13:33.961 } 00:13:33.961 ] 00:13:33.961 }, 00:13:33.961 { 00:13:33.961 "subsystem": "sock", 00:13:33.961 "config": [ 00:13:33.961 { 00:13:33.961 "method": "sock_impl_set_options", 00:13:33.961 "params": { 00:13:33.961 "impl_name": "posix", 00:13:33.961 "recv_buf_size": 2097152, 00:13:33.961 "send_buf_size": 2097152, 00:13:33.961 "enable_recv_pipe": true, 00:13:33.961 "enable_quickack": false, 00:13:33.961 "enable_placement_id": 0, 00:13:33.961 
"enable_zerocopy_send_server": true, 00:13:33.961 "enable_zerocopy_send_client": false, 00:13:33.961 "zerocopy_threshold": 0, 00:13:33.961 "tls_version": 0, 00:13:33.961 "enable_ktls": false 00:13:33.961 } 00:13:33.961 }, 00:13:33.961 { 00:13:33.961 "method": "sock_impl_set_options", 00:13:33.961 "params": { 00:13:33.961 "impl_name": "ssl", 00:13:33.961 "recv_buf_size": 4096, 00:13:33.961 "send_buf_size": 4096, 00:13:33.961 "enable_recv_pipe": true, 00:13:33.961 "enable_quickack": false, 00:13:33.961 "enable_placement_id": 0, 00:13:33.961 "enable_zerocopy_send_server": true, 00:13:33.961 "enable_zerocopy_send_client": false, 00:13:33.961 "zerocopy_threshold": 0, 00:13:33.961 "tls_version": 0, 00:13:33.961 "enable_ktls": false 00:13:33.961 } 00:13:33.961 } 00:13:33.961 ] 00:13:33.961 }, 00:13:33.961 { 00:13:33.961 "subsystem": "vmd", 00:13:33.961 "config": [] 00:13:33.961 }, 00:13:33.961 { 00:13:33.961 "subsystem": "accel", 00:13:33.961 "config": [ 00:13:33.961 { 00:13:33.961 "method": "accel_set_options", 00:13:33.961 "params": { 00:13:33.961 "small_cache_size": 128, 00:13:33.961 "large_cache_size": 16, 00:13:33.961 "task_count": 2048, 00:13:33.961 "sequence_count": 2048, 00:13:33.961 "buf_count": 2048 00:13:33.961 } 00:13:33.961 } 00:13:33.961 ] 00:13:33.961 }, 00:13:33.961 { 00:13:33.961 "subsystem": "bdev", 00:13:33.961 "config": [ 00:13:33.961 { 00:13:33.961 "method": "bdev_set_options", 00:13:33.961 "params": { 00:13:33.961 "bdev_io_pool_size": 65535, 00:13:33.961 "bdev_io_cache_size": 256, 00:13:33.961 "bdev_auto_examine": true, 00:13:33.961 "iobuf_small_cache_size": 128, 00:13:33.961 "iobuf_large_cache_size": 16 00:13:33.961 } 00:13:33.961 }, 00:13:33.961 { 00:13:33.961 "method": "bdev_raid_set_options", 00:13:33.961 "params": { 00:13:33.961 "process_window_size_kb": 1024 00:13:33.961 } 00:13:33.961 }, 00:13:33.961 { 00:13:33.961 "method": "bdev_iscsi_set_options", 00:13:33.961 "params": { 00:13:33.961 "timeout_sec": 30 00:13:33.961 } 00:13:33.961 }, 00:13:33.961 { 00:13:33.961 "method": "bdev_nvme_set_options", 00:13:33.961 "params": { 00:13:33.961 "action_on_timeout": "none", 00:13:33.961 "timeout_us": 0, 00:13:33.961 "timeout_admin_us": 0, 00:13:33.961 "keep_alive_timeout_ms": 10000, 00:13:33.961 "transport_retry_count": 4, 00:13:33.961 "arbitration_burst": 0, 00:13:33.961 "low_priority_weight": 0, 00:13:33.961 "medium_priority_weight": 0, 00:13:33.961 "high_priority_weight": 0, 00:13:33.961 "nvme_adminq_poll_period_us": 10000, 00:13:33.961 "nvme_ioq_poll_period_us": 0, 00:13:33.961 "io_queue_requests": 0, 00:13:33.961 "delay_cmd_submit": true, 00:13:33.961 "bdev_retry_count": 3, 00:13:33.961 "transport_ack_timeout": 0, 00:13:33.961 "ctrlr_loss_timeout_sec": 0, 00:13:33.962 "reconnect_delay_sec": 0, 00:13:33.962 "fast_io_fail_timeout_sec": 0, 00:13:33.962 "generate_uuids": false, 00:13:33.962 "transport_tos": 0, 00:13:33.962 "io_path_stat": false, 00:13:33.962 "allow_accel_sequence": false 00:13:33.962 } 00:13:33.962 }, 00:13:33.962 { 00:13:33.962 "method": "bdev_nvme_set_hotplug", 00:13:33.962 "params": { 00:13:33.962 "period_us": 100000, 00:13:33.962 "enable": false 00:13:33.962 } 00:13:33.962 }, 00:13:33.962 { 00:13:33.962 "method": "bdev_malloc_create", 00:13:33.962 "params": { 00:13:33.962 "name": "malloc0", 00:13:33.962 "num_blocks": 8192, 00:13:33.962 "block_size": 4096, 00:13:33.962 "physical_block_size": 4096, 00:13:33.962 "uuid": "30e7e93d-d033-496d-bd07-c7c5139dc8c4", 00:13:33.962 "optimal_io_boundary": 0 00:13:33.962 } 00:13:33.962 }, 00:13:33.962 { 00:13:33.962 
"method": "bdev_wait_for_examine" 00:13:33.962 } 00:13:33.962 ] 00:13:33.962 }, 00:13:33.962 { 00:13:33.962 "subsystem": "scsi", 00:13:33.962 "config": null 00:13:33.962 }, 00:13:33.962 { 00:13:33.962 "subsystem": "scheduler", 00:13:33.962 "config": [ 00:13:33.962 { 00:13:33.962 "method": "framework_set_scheduler", 00:13:33.962 "params": { 00:13:33.962 "name": "static" 00:13:33.962 } 00:13:33.962 } 00:13:33.962 ] 00:13:33.962 }, 00:13:33.962 { 00:13:33.962 "subsystem": "vhost_scsi", 00:13:33.962 "config": [] 00:13:33.962 }, 00:13:33.962 { 00:13:33.962 "subsystem": "vhost_blk", 00:13:33.962 "config": [] 00:13:33.962 }, 00:13:33.962 { 00:13:33.962 "subsystem": "ublk", 00:13:33.962 "config": [ 00:13:33.962 { 00:13:33.962 "method": "ublk_create_target", 00:13:33.962 "params": { 00:13:33.962 "cpumask": "1" 00:13:33.962 } 00:13:33.962 }, 00:13:33.962 { 00:13:33.962 "method": "ublk_start_disk", 00:13:33.962 "params": { 00:13:33.962 "bdev_name": "malloc0", 00:13:33.962 "ublk_id": 0, 00:13:33.962 "num_queues": 1, 00:13:33.962 "queue_depth": 128 00:13:33.962 } 00:13:33.962 } 00:13:33.962 ] 00:13:33.962 }, 00:13:33.962 { 00:13:33.962 "subsystem": "nbd", 00:13:33.962 "config": [] 00:13:33.962 }, 00:13:33.962 { 00:13:33.962 "subsystem": "nvmf", 00:13:33.962 "config": [ 00:13:33.962 { 00:13:33.962 "method": "nvmf_set_config", 00:13:33.962 "params": { 00:13:33.962 "discovery_filter": "match_any", 00:13:33.962 "admin_cmd_passthru": { 00:13:33.962 "identify_ctrlr": false 00:13:33.962 } 00:13:33.962 } 00:13:33.962 }, 00:13:33.962 { 00:13:33.962 "method": "nvmf_set_max_subsystems", 00:13:33.962 "params": { 00:13:33.962 "max_subsystems": 1024 00:13:33.962 } 00:13:33.962 }, 00:13:33.962 { 00:13:33.962 "method": "nvmf_set_crdt", 00:13:33.962 "params": { 00:13:33.962 "crdt1": 0, 00:13:33.962 "crdt2": 0, 00:13:33.962 "crdt3": 0 00:13:33.962 } 00:13:33.962 } 00:13:33.962 ] 00:13:33.962 }, 00:13:33.962 { 00:13:33.962 "subsystem": "iscsi", 00:13:33.962 "config": [ 00:13:33.962 { 00:13:33.962 "method": "iscsi_set_options", 00:13:33.962 "params": { 00:13:33.962 "node_base": "iqn.2016-06.io.spdk", 00:13:33.962 "max_sessions": 128, 00:13:33.962 "max_connections_per_session": 2, 00:13:33.962 "max_queue_depth": 64, 00:13:33.962 "default_time2wait": 2, 00:13:33.962 "default_time2retain": 20, 00:13:33.962 "first_burst_length": 8192, 00:13:33.962 "immediate_data": true, 00:13:33.962 "allow_duplicated_isid": false, 00:13:33.962 "error_recovery_level": 0, 00:13:33.962 "nop_timeout": 60, 00:13:33.962 "nop_in_interval": 30, 00:13:33.962 "disable_chap": false, 00:13:33.962 "require_chap": false, 00:13:33.962 "mutual_chap": false, 00:13:33.962 "chap_group": 0, 00:13:33.962 "max_large_datain_per_connection": 64, 00:13:33.962 "max_r2t_per_connection": 4, 00:13:33.962 "pdu_pool_size": 36864, 00:13:33.962 "immediate_data_pool_size": 16384, 00:13:33.962 "data_out_pool_size": 2048 00:13:33.962 } 00:13:33.962 } 00:13:33.962 ] 00:13:33.962 } 00:13:33.962 ] 00:13:33.962 }' 00:13:33.962 16:22:53 -- ublk/ublk.sh@116 -- # killprocess 69192 00:13:33.962 16:22:53 -- common/autotest_common.sh@936 -- # '[' -z 69192 ']' 00:13:33.962 16:22:53 -- common/autotest_common.sh@940 -- # kill -0 69192 00:13:33.962 16:22:53 -- common/autotest_common.sh@941 -- # uname 00:13:33.962 16:22:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:33.962 16:22:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69192 00:13:33.962 16:22:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:33.962 killing process with pid 
69192 00:13:33.962 16:22:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:33.962 16:22:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69192' 00:13:33.962 16:22:53 -- common/autotest_common.sh@955 -- # kill 69192 00:13:33.962 16:22:53 -- common/autotest_common.sh@960 -- # wait 69192 00:13:34.901 [2024-11-09 16:22:54.627345] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:34.901 [2024-11-09 16:22:54.664256] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:34.901 [2024-11-09 16:22:54.664353] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:35.159 [2024-11-09 16:22:54.672253] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:35.159 [2024-11-09 16:22:54.672295] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:35.159 [2024-11-09 16:22:54.672306] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:35.159 [2024-11-09 16:22:54.672327] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:35.159 [2024-11-09 16:22:54.672430] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:36.096 16:22:55 -- ublk/ublk.sh@119 -- # tgtpid=69254 00:13:36.096 16:22:55 -- ublk/ublk.sh@121 -- # waitforlisten 69254 00:13:36.096 16:22:55 -- common/autotest_common.sh@829 -- # '[' -z 69254 ']' 00:13:36.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:36.096 16:22:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:36.096 16:22:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:36.096 16:22:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
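The -c /dev/fd/63 argument visible in the spdk_tgt command line below is bash process substitution: the harness replays the JSON captured from the first target straight into the new one without writing a file. A minimal sketch of that restore step, assuming $config holds the save_config output captured above:

# Boot a fresh target directly from the in-memory config snapshot.
build/bin/spdk_tgt -L ublk -c <(echo "$config")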
00:13:36.096 16:22:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:36.096 16:22:55 -- common/autotest_common.sh@10 -- # set +x 00:13:36.096 16:22:55 -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:13:36.096 16:22:55 -- ublk/ublk.sh@118 -- # echo '{ 00:13:36.096 "subsystems": [ 00:13:36.096 { 00:13:36.096 "subsystem": "iobuf", 00:13:36.096 "config": [ 00:13:36.096 { 00:13:36.096 "method": "iobuf_set_options", 00:13:36.096 "params": { 00:13:36.096 "small_pool_count": 8192, 00:13:36.096 "large_pool_count": 1024, 00:13:36.096 "small_bufsize": 8192, 00:13:36.096 "large_bufsize": 135168 00:13:36.096 } 00:13:36.096 } 00:13:36.096 ] 00:13:36.096 }, 00:13:36.096 { 00:13:36.096 "subsystem": "sock", 00:13:36.096 "config": [ 00:13:36.096 { 00:13:36.096 "method": "sock_impl_set_options", 00:13:36.096 "params": { 00:13:36.096 "impl_name": "posix", 00:13:36.096 "recv_buf_size": 2097152, 00:13:36.096 "send_buf_size": 2097152, 00:13:36.096 "enable_recv_pipe": true, 00:13:36.096 "enable_quickack": false, 00:13:36.096 "enable_placement_id": 0, 00:13:36.096 "enable_zerocopy_send_server": true, 00:13:36.097 "enable_zerocopy_send_client": false, 00:13:36.097 "zerocopy_threshold": 0, 00:13:36.097 "tls_version": 0, 00:13:36.097 "enable_ktls": false 00:13:36.097 } 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "method": "sock_impl_set_options", 00:13:36.097 "params": { 00:13:36.097 "impl_name": "ssl", 00:13:36.097 "recv_buf_size": 4096, 00:13:36.097 "send_buf_size": 4096, 00:13:36.097 "enable_recv_pipe": true, 00:13:36.097 "enable_quickack": false, 00:13:36.097 "enable_placement_id": 0, 00:13:36.097 "enable_zerocopy_send_server": true, 00:13:36.097 "enable_zerocopy_send_client": false, 00:13:36.097 "zerocopy_threshold": 0, 00:13:36.097 "tls_version": 0, 00:13:36.097 "enable_ktls": false 00:13:36.097 } 00:13:36.097 } 00:13:36.097 ] 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "subsystem": "vmd", 00:13:36.097 "config": [] 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "subsystem": "accel", 00:13:36.097 "config": [ 00:13:36.097 { 00:13:36.097 "method": "accel_set_options", 00:13:36.097 "params": { 00:13:36.097 "small_cache_size": 128, 00:13:36.097 "large_cache_size": 16, 00:13:36.097 "task_count": 2048, 00:13:36.097 "sequence_count": 2048, 00:13:36.097 "buf_count": 2048 00:13:36.097 } 00:13:36.097 } 00:13:36.097 ] 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "subsystem": "bdev", 00:13:36.097 "config": [ 00:13:36.097 { 00:13:36.097 "method": "bdev_set_options", 00:13:36.097 "params": { 00:13:36.097 "bdev_io_pool_size": 65535, 00:13:36.097 "bdev_io_cache_size": 256, 00:13:36.097 "bdev_auto_examine": true, 00:13:36.097 "iobuf_small_cache_size": 128, 00:13:36.097 "iobuf_large_cache_size": 16 00:13:36.097 } 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "method": "bdev_raid_set_options", 00:13:36.097 "params": { 00:13:36.097 "process_window_size_kb": 1024 00:13:36.097 } 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "method": "bdev_iscsi_set_options", 00:13:36.097 "params": { 00:13:36.097 "timeout_sec": 30 00:13:36.097 } 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "method": "bdev_nvme_set_options", 00:13:36.097 "params": { 00:13:36.097 "action_on_timeout": "none", 00:13:36.097 "timeout_us": 0, 00:13:36.097 "timeout_admin_us": 0, 00:13:36.097 "keep_alive_timeout_ms": 10000, 00:13:36.097 "transport_retry_count": 4, 00:13:36.097 "arbitration_burst": 0, 00:13:36.097 "low_priority_weight": 0, 00:13:36.097 "medium_priority_weight": 0, 00:13:36.097 "high_priority_weight": 0, 
00:13:36.097 "nvme_adminq_poll_period_us": 10000, 00:13:36.097 "nvme_ioq_poll_period_us": 0, 00:13:36.097 "io_queue_requests": 0, 00:13:36.097 "delay_cmd_submit": true, 00:13:36.097 "bdev_retry_count": 3, 00:13:36.097 "transport_ack_timeout": 0, 00:13:36.097 "ctrlr_loss_timeout_sec": 0, 00:13:36.097 "reconnect_delay_sec": 0, 00:13:36.097 "fast_io_fail_timeout_sec": 0, 00:13:36.097 "generate_uuids": false, 00:13:36.097 "transport_tos": 0, 00:13:36.097 "io_path_stat": false, 00:13:36.097 "allow_accel_sequence": false 00:13:36.097 } 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "method": "bdev_nvme_set_hotplug", 00:13:36.097 "params": { 00:13:36.097 "period_us": 100000, 00:13:36.097 "enable": false 00:13:36.097 } 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "method": "bdev_malloc_create", 00:13:36.097 "params": { 00:13:36.097 "name": "malloc0", 00:13:36.097 "num_blocks": 8192, 00:13:36.097 "block_size": 4096, 00:13:36.097 "physical_block_size": 4096, 00:13:36.097 "uuid": "30e7e93d-d033-496d-bd07-c7c5139dc8c4", 00:13:36.097 "optimal_io_boundary": 0 00:13:36.097 } 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "method": "bdev_wait_for_examine" 00:13:36.097 } 00:13:36.097 ] 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "subsystem": "scsi", 00:13:36.097 "config": null 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "subsystem": "scheduler", 00:13:36.097 "config": [ 00:13:36.097 { 00:13:36.097 "method": "framework_set_scheduler", 00:13:36.097 "params": { 00:13:36.097 "name": "static" 00:13:36.097 } 00:13:36.097 } 00:13:36.097 ] 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "subsystem": "vhost_scsi", 00:13:36.097 "config": [] 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "subsystem": "vhost_blk", 00:13:36.097 "config": [] 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "subsystem": "ublk", 00:13:36.097 "config": [ 00:13:36.097 { 00:13:36.097 "method": "ublk_create_target", 00:13:36.097 "params": { 00:13:36.097 "cpumask": "1" 00:13:36.097 } 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "method": "ublk_start_disk", 00:13:36.097 "params": { 00:13:36.097 "bdev_name": "malloc0", 00:13:36.097 "ublk_id": 0, 00:13:36.097 "num_queues": 1, 00:13:36.097 "queue_depth": 128 00:13:36.097 } 00:13:36.097 } 00:13:36.097 ] 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "subsystem": "nbd", 00:13:36.097 "config": [] 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "subsystem": "nvmf", 00:13:36.097 "config": [ 00:13:36.097 { 00:13:36.097 "method": "nvmf_set_config", 00:13:36.097 "params": { 00:13:36.097 "discovery_filter": "match_any", 00:13:36.097 "admin_cmd_passthru": { 00:13:36.097 "identify_ctrlr": false 00:13:36.097 } 00:13:36.097 } 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "method": "nvmf_set_max_subsystems", 00:13:36.097 "params": { 00:13:36.097 "max_subsystems": 1024 00:13:36.097 } 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "method": "nvmf_set_crdt", 00:13:36.097 "params": { 00:13:36.097 "crdt1": 0, 00:13:36.097 "crdt2": 0, 00:13:36.097 "crdt3": 0 00:13:36.097 } 00:13:36.097 } 00:13:36.097 ] 00:13:36.097 }, 00:13:36.097 { 00:13:36.097 "subsystem": "iscsi", 00:13:36.097 "config": [ 00:13:36.097 { 00:13:36.097 "method": "iscsi_set_options", 00:13:36.097 "params": { 00:13:36.097 "node_base": "iqn.2016-06.io.spdk", 00:13:36.097 "max_sessions": 128, 00:13:36.097 "max_connections_per_session": 2, 00:13:36.097 "max_queue_depth": 64, 00:13:36.097 "default_time2wait": 2, 00:13:36.097 "default_time2retain": 20, 00:13:36.097 "first_burst_length": 8192, 00:13:36.097 "immediate_data": true, 00:13:36.097 "allow_duplicated_isid": false, 00:13:36.097 
"error_recovery_level": 0, 00:13:36.097 "nop_timeout": 60, 00:13:36.097 "nop_in_interval": 30, 00:13:36.097 "disable_chap": false, 00:13:36.097 "require_chap": false, 00:13:36.097 "mutual_chap": false, 00:13:36.097 "chap_group": 0, 00:13:36.097 "max_large_datain_per_connection": 64, 00:13:36.097 "max_r2t_per_connection": 4, 00:13:36.097 "pdu_pool_size": 36864, 00:13:36.097 "immediate_data_pool_size": 16384, 00:13:36.097 "data_out_pool_size": 2048 00:13:36.097 } 00:13:36.097 } 00:13:36.097 ] 00:13:36.097 } 00:13:36.097 ] 00:13:36.097 }' 00:13:36.394 [2024-11-09 16:22:55.927367] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:36.394 [2024-11-09 16:22:55.927487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69254 ] 00:13:36.394 [2024-11-09 16:22:56.074315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.652 [2024-11-09 16:22:56.248048] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:36.652 [2024-11-09 16:22:56.248200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.218 [2024-11-09 16:22:56.835825] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:37.218 [2024-11-09 16:22:56.843321] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:13:37.218 [2024-11-09 16:22:56.843377] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:13:37.218 [2024-11-09 16:22:56.843383] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:37.218 [2024-11-09 16:22:56.843388] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:37.218 [2024-11-09 16:22:56.851333] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:37.218 [2024-11-09 16:22:56.851351] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:37.218 [2024-11-09 16:22:56.859244] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:37.218 [2024-11-09 16:22:56.859315] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:37.218 [2024-11-09 16:22:56.876241] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:37.784 16:22:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:37.784 16:22:57 -- common/autotest_common.sh@862 -- # return 0 00:13:37.784 16:22:57 -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:13:37.784 16:22:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.784 16:22:57 -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:13:37.784 16:22:57 -- common/autotest_common.sh@10 -- # set +x 00:13:37.784 16:22:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.784 16:22:57 -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:37.784 16:22:57 -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:13:37.784 16:22:57 -- ublk/ublk.sh@125 -- # killprocess 69254 00:13:37.784 16:22:57 -- common/autotest_common.sh@936 -- # '[' -z 69254 ']' 00:13:37.784 16:22:57 -- common/autotest_common.sh@940 -- # kill -0 69254 00:13:37.784 16:22:57 -- common/autotest_common.sh@941 -- # uname 00:13:37.784 16:22:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux 
']' 00:13:37.784 16:22:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69254 00:13:37.784 16:22:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:37.784 16:22:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:37.784 killing process with pid 69254 00:13:37.784 16:22:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69254' 00:13:37.785 16:22:57 -- common/autotest_common.sh@955 -- # kill 69254 00:13:37.785 16:22:57 -- common/autotest_common.sh@960 -- # wait 69254 00:13:38.720 [2024-11-09 16:22:58.212220] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:38.720 [2024-11-09 16:22:58.244308] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:38.720 [2024-11-09 16:22:58.244404] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:38.720 [2024-11-09 16:22:58.251324] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:38.720 [2024-11-09 16:22:58.251362] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:38.720 [2024-11-09 16:22:58.251368] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:38.720 [2024-11-09 16:22:58.251386] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:38.720 [2024-11-09 16:22:58.251494] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:40.097 16:22:59 -- ublk/ublk.sh@126 -- # trap - EXIT 00:13:40.097 00:13:40.097 real 0m7.877s 00:13:40.097 user 0m5.884s 00:13:40.097 sys 0m2.937s 00:13:40.097 16:22:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:40.097 16:22:59 -- common/autotest_common.sh@10 -- # set +x 00:13:40.097 ************************************ 00:13:40.097 END TEST test_save_ublk_config 00:13:40.097 ************************************ 00:13:40.097 16:22:59 -- ublk/ublk.sh@139 -- # spdk_pid=69329 00:13:40.097 16:22:59 -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:40.097 16:22:59 -- ublk/ublk.sh@141 -- # waitforlisten 69329 00:13:40.097 16:22:59 -- common/autotest_common.sh@829 -- # '[' -z 69329 ']' 00:13:40.097 16:22:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.097 16:22:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:40.097 16:22:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.098 16:22:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:40.098 16:22:59 -- common/autotest_common.sh@10 -- # set +x 00:13:40.098 16:22:59 -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:40.098 [2024-11-09 16:22:59.556100] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:13:40.098 [2024-11-09 16:22:59.556619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69329 ] 00:13:40.098 [2024-11-09 16:22:59.707453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:40.359 [2024-11-09 16:22:59.932623] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:40.359 [2024-11-09 16:22:59.933039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.359 [2024-11-09 16:22:59.933145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.744 16:23:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:41.744 16:23:01 -- common/autotest_common.sh@862 -- # return 0 00:13:41.744 16:23:01 -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:13:41.744 16:23:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:41.744 16:23:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:41.744 16:23:01 -- common/autotest_common.sh@10 -- # set +x 00:13:41.744 ************************************ 00:13:41.744 START TEST test_create_ublk 00:13:41.744 ************************************ 00:13:41.744 16:23:01 -- common/autotest_common.sh@1114 -- # test_create_ublk 00:13:41.744 16:23:01 -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:13:41.744 16:23:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.744 16:23:01 -- common/autotest_common.sh@10 -- # set +x 00:13:41.744 [2024-11-09 16:23:01.110516] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:41.744 16:23:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.744 16:23:01 -- ublk/ublk.sh@33 -- # ublk_target= 00:13:41.744 16:23:01 -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:13:41.744 16:23:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.744 16:23:01 -- common/autotest_common.sh@10 -- # set +x 00:13:41.744 16:23:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.744 16:23:01 -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:13:41.744 16:23:01 -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:41.744 16:23:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.744 16:23:01 -- common/autotest_common.sh@10 -- # set +x 00:13:41.744 [2024-11-09 16:23:01.329367] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:41.744 [2024-11-09 16:23:01.329696] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:41.745 [2024-11-09 16:23:01.329708] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:41.745 [2024-11-09 16:23:01.329716] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:41.745 [2024-11-09 16:23:01.337266] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:41.745 [2024-11-09 16:23:01.337290] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:41.745 [2024-11-09 16:23:01.345255] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:41.745 [2024-11-09 16:23:01.359406] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:41.745 [2024-11-09 16:23:01.374239] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: 
ctrl cmd UBLK_CMD_START_DEV completed 00:13:41.745 16:23:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.745 16:23:01 -- ublk/ublk.sh@37 -- # ublk_id=0 00:13:41.745 16:23:01 -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:13:41.745 16:23:01 -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:13:41.745 16:23:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.745 16:23:01 -- common/autotest_common.sh@10 -- # set +x 00:13:41.745 16:23:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.745 16:23:01 -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:13:41.745 { 00:13:41.745 "ublk_device": "/dev/ublkb0", 00:13:41.745 "id": 0, 00:13:41.745 "queue_depth": 512, 00:13:41.745 "num_queues": 4, 00:13:41.745 "bdev_name": "Malloc0" 00:13:41.745 } 00:13:41.745 ]' 00:13:41.745 16:23:01 -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:13:41.745 16:23:01 -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:41.745 16:23:01 -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:13:41.745 16:23:01 -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:13:41.745 16:23:01 -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:13:41.745 16:23:01 -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:13:41.745 16:23:01 -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:13:42.004 16:23:01 -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:13:42.004 16:23:01 -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:13:42.004 16:23:01 -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:42.004 16:23:01 -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:13:42.004 16:23:01 -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:13:42.004 16:23:01 -- lvol/common.sh@41 -- # local offset=0 00:13:42.004 16:23:01 -- lvol/common.sh@42 -- # local size=134217728 00:13:42.004 16:23:01 -- lvol/common.sh@43 -- # local rw=write 00:13:42.004 16:23:01 -- lvol/common.sh@44 -- # local pattern=0xcc 00:13:42.004 16:23:01 -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:13:42.004 16:23:01 -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:13:42.004 16:23:01 -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:13:42.004 16:23:01 -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:42.004 16:23:01 -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:13:42.004 16:23:01 -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:13:42.004 fio: verification read phase will never start because write phase uses all of runtime 00:13:42.004 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:13:42.004 fio-3.35 00:13:42.004 Starting 1 process 00:13:52.078 00:13:52.078 fio_test: (groupid=0, jobs=1): err= 0: pid=69383: Sat Nov 9 16:23:11 2024 00:13:52.078 write: IOPS=15.1k, BW=59.1MiB/s (62.0MB/s)(591MiB/10001msec); 0 zone resets 00:13:52.078 clat (usec): min=37, max=4108, avg=65.36, stdev=92.48 00:13:52.078 lat (usec): min=38, max=4109, avg=65.77, stdev=92.49 00:13:52.078 clat percentiles (usec): 00:13:52.078 | 1.00th=[ 49], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 57], 00:13:52.078 | 30.00th=[ 
59], 40.00th=[ 61], 50.00th=[ 62], 60.00th=[ 63], 00:13:52.078 | 70.00th=[ 65], 80.00th=[ 67], 90.00th=[ 70], 95.00th=[ 74], 00:13:52.078 | 99.00th=[ 85], 99.50th=[ 94], 99.90th=[ 1827], 99.95th=[ 2704], 00:13:52.078 | 99.99th=[ 3523] 00:13:52.078 bw ( KiB/s): min=56224, max=64344, per=100.00%, avg=60591.16, stdev=1861.68, samples=19 00:13:52.078 iops : min=14056, max=16086, avg=15147.79, stdev=465.42, samples=19 00:13:52.078 lat (usec) : 50=2.00%, 100=97.58%, 250=0.22%, 500=0.02%, 750=0.01% 00:13:52.078 lat (usec) : 1000=0.02% 00:13:52.078 lat (msec) : 2=0.06%, 4=0.09%, 10=0.01% 00:13:52.078 cpu : usr=2.45%, sys=12.01%, ctx=151329, majf=0, minf=796 00:13:52.078 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:52.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.078 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.078 issued rwts: total=0,151328,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.078 latency : target=0, window=0, percentile=100.00%, depth=1 00:13:52.078 00:13:52.078 Run status group 0 (all jobs): 00:13:52.078 WRITE: bw=59.1MiB/s (62.0MB/s), 59.1MiB/s-59.1MiB/s (62.0MB/s-62.0MB/s), io=591MiB (620MB), run=10001-10001msec 00:13:52.078 00:13:52.078 Disk stats (read/write): 00:13:52.079 ublkb0: ios=0/149778, merge=0/0, ticks=0/8350, in_queue=8351, util=99.01% 00:13:52.079 16:23:11 -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:13:52.079 16:23:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.079 16:23:11 -- common/autotest_common.sh@10 -- # set +x 00:13:52.079 [2024-11-09 16:23:11.777720] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:52.079 [2024-11-09 16:23:11.823650] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:52.079 [2024-11-09 16:23:11.824727] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:52.079 [2024-11-09 16:23:11.832249] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:52.079 [2024-11-09 16:23:11.832495] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:52.079 [2024-11-09 16:23:11.832504] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:52.079 16:23:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.079 16:23:11 -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 00:13:52.079 16:23:11 -- common/autotest_common.sh@650 -- # local es=0 00:13:52.079 16:23:11 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:13:52.079 16:23:11 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:52.079 16:23:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.079 16:23:11 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:52.079 16:23:11 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:52.079 16:23:11 -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:13:52.079 16:23:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.079 16:23:11 -- common/autotest_common.sh@10 -- # set +x 00:13:52.337 [2024-11-09 16:23:11.848323] ublk.c:1049:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:13:52.337 request: 00:13:52.337 { 00:13:52.337 "ublk_id": 0, 00:13:52.337 "method": "ublk_stop_disk", 00:13:52.337 "req_id": 1 00:13:52.337 } 00:13:52.337 Got JSON-RPC error response 00:13:52.337 response: 00:13:52.337 { 00:13:52.337 "code": -19, 00:13:52.337 
"message": "No such device" 00:13:52.337 } 00:13:52.337 16:23:11 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:52.337 16:23:11 -- common/autotest_common.sh@653 -- # es=1 00:13:52.337 16:23:11 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:52.337 16:23:11 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:52.337 16:23:11 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:52.337 16:23:11 -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:13:52.337 16:23:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.337 16:23:11 -- common/autotest_common.sh@10 -- # set +x 00:13:52.337 [2024-11-09 16:23:11.864299] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:52.337 [2024-11-09 16:23:11.872241] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:52.337 [2024-11-09 16:23:11.872267] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:52.337 16:23:11 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.337 16:23:11 -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:52.337 16:23:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.337 16:23:11 -- common/autotest_common.sh@10 -- # set +x 00:13:52.596 16:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.596 16:23:12 -- ublk/ublk.sh@57 -- # check_leftover_devices 00:13:52.596 16:23:12 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:52.596 16:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.596 16:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:52.596 16:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.596 16:23:12 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:52.596 16:23:12 -- lvol/common.sh@26 -- # jq length 00:13:52.596 16:23:12 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:52.596 16:23:12 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:52.596 16:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.596 16:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:52.596 16:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.596 16:23:12 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:52.596 16:23:12 -- lvol/common.sh@28 -- # jq length 00:13:52.596 16:23:12 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:52.596 00:13:52.596 real 0m11.223s 00:13:52.596 user 0m0.541s 00:13:52.596 sys 0m1.267s 00:13:52.596 ************************************ 00:13:52.596 END TEST test_create_ublk 00:13:52.596 ************************************ 00:13:52.596 16:23:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:52.596 16:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:52.596 16:23:12 -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:13:52.596 16:23:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:52.596 16:23:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:52.596 16:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:52.596 ************************************ 00:13:52.596 START TEST test_create_multi_ublk 00:13:52.596 ************************************ 00:13:52.596 16:23:12 -- common/autotest_common.sh@1114 -- # test_create_multi_ublk 00:13:52.596 16:23:12 -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:13:52.596 16:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.596 16:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:52.854 [2024-11-09 16:23:12.373728] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created 
successfully 00:13:52.854 16:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.854 16:23:12 -- ublk/ublk.sh@62 -- # ublk_target= 00:13:52.854 16:23:12 -- ublk/ublk.sh@64 -- # seq 0 3 00:13:52.854 16:23:12 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:52.854 16:23:12 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:13:52.854 16:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.854 16:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:52.854 16:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.854 16:23:12 -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:13:52.854 16:23:12 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:13:52.854 16:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.854 16:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:52.854 [2024-11-09 16:23:12.600341] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:13:52.854 [2024-11-09 16:23:12.600650] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:13:52.854 [2024-11-09 16:23:12.600665] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:13:52.854 [2024-11-09 16:23:12.600671] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:13:52.854 [2024-11-09 16:23:12.612469] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:52.854 [2024-11-09 16:23:12.612491] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:53.113 [2024-11-09 16:23:12.632250] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:53.113 [2024-11-09 16:23:12.632749] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:13:53.113 [2024-11-09 16:23:12.659242] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:13:53.113 16:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.113 16:23:12 -- ublk/ublk.sh@68 -- # ublk_id=0 00:13:53.113 16:23:12 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:53.113 16:23:12 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:13:53.113 16:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.113 16:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:53.113 16:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.113 16:23:12 -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:13:53.113 16:23:12 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:13:53.113 16:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.113 16:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:53.371 [2024-11-09 16:23:12.883330] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:13:53.371 [2024-11-09 16:23:12.883634] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:13:53.371 [2024-11-09 16:23:12.883643] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:53.371 [2024-11-09 16:23:12.883647] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:53.371 [2024-11-09 16:23:12.891273] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:53.371 [2024-11-09 16:23:12.891289] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:53.371 
[2024-11-09 16:23:12.899258] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:53.371 [2024-11-09 16:23:12.899749] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:53.371 [2024-11-09 16:23:12.923252] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:53.371 16:23:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.371 16:23:12 -- ublk/ublk.sh@68 -- # ublk_id=1 00:13:53.371 16:23:12 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:53.371 16:23:12 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:13:53.371 16:23:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.371 16:23:12 -- common/autotest_common.sh@10 -- # set +x 00:13:53.371 16:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.371 16:23:13 -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:13:53.371 16:23:13 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:13:53.371 16:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.371 16:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:53.371 [2024-11-09 16:23:13.091359] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:13:53.371 [2024-11-09 16:23:13.091651] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:13:53.371 [2024-11-09 16:23:13.091658] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:13:53.371 [2024-11-09 16:23:13.091665] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:13:53.371 [2024-11-09 16:23:13.099266] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:53.371 [2024-11-09 16:23:13.099285] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:53.371 [2024-11-09 16:23:13.107245] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:53.371 [2024-11-09 16:23:13.107744] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:13:53.371 [2024-11-09 16:23:13.117266] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:13:53.371 16:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.371 16:23:13 -- ublk/ublk.sh@68 -- # ublk_id=2 00:13:53.371 16:23:13 -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:53.371 16:23:13 -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:13:53.371 16:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.371 16:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:53.630 16:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.630 16:23:13 -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:13:53.630 16:23:13 -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:13:53.630 16:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.630 16:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:53.630 [2024-11-09 16:23:13.276343] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:13:53.630 [2024-11-09 16:23:13.276637] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:13:53.630 [2024-11-09 16:23:13.276650] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:13:53.630 [2024-11-09 16:23:13.276655] ublk.c: 433:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:13:53.630 [2024-11-09 16:23:13.284265] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:53.630 [2024-11-09 16:23:13.284281] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:53.630 [2024-11-09 16:23:13.292249] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:53.630 [2024-11-09 16:23:13.292736] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:13:53.630 [2024-11-09 16:23:13.301274] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:13:53.630 16:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.630 16:23:13 -- ublk/ublk.sh@68 -- # ublk_id=3 00:13:53.630 16:23:13 -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:13:53.630 16:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.630 16:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:53.630 16:23:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.630 16:23:13 -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:13:53.630 { 00:13:53.630 "ublk_device": "/dev/ublkb0", 00:13:53.630 "id": 0, 00:13:53.630 "queue_depth": 512, 00:13:53.630 "num_queues": 4, 00:13:53.630 "bdev_name": "Malloc0" 00:13:53.630 }, 00:13:53.630 { 00:13:53.630 "ublk_device": "/dev/ublkb1", 00:13:53.630 "id": 1, 00:13:53.630 "queue_depth": 512, 00:13:53.630 "num_queues": 4, 00:13:53.630 "bdev_name": "Malloc1" 00:13:53.630 }, 00:13:53.630 { 00:13:53.630 "ublk_device": "/dev/ublkb2", 00:13:53.630 "id": 2, 00:13:53.630 "queue_depth": 512, 00:13:53.630 "num_queues": 4, 00:13:53.630 "bdev_name": "Malloc2" 00:13:53.630 }, 00:13:53.630 { 00:13:53.630 "ublk_device": "/dev/ublkb3", 00:13:53.630 "id": 3, 00:13:53.630 "queue_depth": 512, 00:13:53.630 "num_queues": 4, 00:13:53.630 "bdev_name": "Malloc3" 00:13:53.630 } 00:13:53.630 ]' 00:13:53.631 16:23:13 -- ublk/ublk.sh@72 -- # seq 0 3 00:13:53.631 16:23:13 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:53.631 16:23:13 -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:13:53.631 16:23:13 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:13:53.631 16:23:13 -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:13:53.631 16:23:13 -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:13:53.631 16:23:13 -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:13:53.889 16:23:13 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:53.889 16:23:13 -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:13:53.889 16:23:13 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:53.889 16:23:13 -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:13:53.889 16:23:13 -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:13:53.889 16:23:13 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:53.889 16:23:13 -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:13:53.889 16:23:13 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 00:13:53.889 16:23:13 -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:13:53.889 16:23:13 -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:13:53.889 16:23:13 -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:13:53.889 16:23:13 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:53.889 16:23:13 -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:13:53.889 16:23:13 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:53.889 16:23:13 -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:13:53.889 16:23:13 -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:13:53.889 16:23:13 -- 
ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:53.889 16:23:13 -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:13:54.148 16:23:13 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:13:54.148 16:23:13 -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:13:54.148 16:23:13 -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:13:54.148 16:23:13 -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:13:54.148 16:23:13 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:54.148 16:23:13 -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:13:54.148 16:23:13 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:54.148 16:23:13 -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:13:54.148 16:23:13 -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:13:54.148 16:23:13 -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:54.148 16:23:13 -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:13:54.148 16:23:13 -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:13:54.148 16:23:13 -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:13:54.148 16:23:13 -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:13:54.148 16:23:13 -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:13:54.148 16:23:13 -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:13:54.148 16:23:13 -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:13:54.407 16:23:13 -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:13:54.407 16:23:13 -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:13:54.407 16:23:13 -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:13:54.407 16:23:13 -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:13:54.407 16:23:13 -- ublk/ublk.sh@85 -- # seq 0 3 00:13:54.407 16:23:13 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:54.407 16:23:13 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:13:54.407 16:23:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.407 16:23:13 -- common/autotest_common.sh@10 -- # set +x 00:13:54.407 [2024-11-09 16:23:13.972317] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:13:54.407 [2024-11-09 16:23:14.004290] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:54.407 [2024-11-09 16:23:14.005094] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:13:54.407 [2024-11-09 16:23:14.011277] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:54.407 [2024-11-09 16:23:14.011521] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:13:54.407 [2024-11-09 16:23:14.011535] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:13:54.407 16:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.407 16:23:14 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:54.407 16:23:14 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:13:54.407 16:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.407 16:23:14 -- common/autotest_common.sh@10 -- # set +x 00:13:54.407 [2024-11-09 16:23:14.028306] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:13:54.407 [2024-11-09 16:23:14.068291] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:54.407 [2024-11-09 16:23:14.069064] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:13:54.407 [2024-11-09 16:23:14.076254] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:54.407 [2024-11-09 16:23:14.076480] ublk.c: 
947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:13:54.407 [2024-11-09 16:23:14.076493] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:13:54.407 16:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.407 16:23:14 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:54.407 16:23:14 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:13:54.407 16:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.407 16:23:14 -- common/autotest_common.sh@10 -- # set +x 00:13:54.407 [2024-11-09 16:23:14.092302] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:13:54.407 [2024-11-09 16:23:14.124294] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:54.407 [2024-11-09 16:23:14.125027] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:13:54.407 [2024-11-09 16:23:14.132328] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:54.407 [2024-11-09 16:23:14.132565] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:13:54.407 [2024-11-09 16:23:14.132579] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:13:54.407 16:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.407 16:23:14 -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:54.407 16:23:14 -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:13:54.407 16:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.407 16:23:14 -- common/autotest_common.sh@10 -- # set +x 00:13:54.407 [2024-11-09 16:23:14.148322] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:13:54.666 [2024-11-09 16:23:14.184284] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:13:54.666 [2024-11-09 16:23:14.184970] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:13:54.666 [2024-11-09 16:23:14.188414] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:13:54.666 [2024-11-09 16:23:14.188640] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:13:54.666 [2024-11-09 16:23:14.188652] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:13:54.666 16:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.666 16:23:14 -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:13:54.666 [2024-11-09 16:23:14.372309] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:54.666 [2024-11-09 16:23:14.380240] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:54.666 [2024-11-09 16:23:14.380268] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:13:54.666 16:23:14 -- ublk/ublk.sh@93 -- # seq 0 3 00:13:54.666 16:23:14 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:54.666 16:23:14 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:13:54.666 16:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.666 16:23:14 -- common/autotest_common.sh@10 -- # set +x 00:13:55.233 16:23:14 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.233 16:23:14 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.233 16:23:14 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:13:55.233 16:23:14 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.233 16:23:14 -- common/autotest_common.sh@10 -- # set +x 00:13:55.491 16:23:15 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.491 16:23:15 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.491 16:23:15 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:13:55.492 16:23:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.492 16:23:15 -- common/autotest_common.sh@10 -- # set +x 00:13:55.750 16:23:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.750 16:23:15 -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:13:55.750 16:23:15 -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:13:55.750 16:23:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.750 16:23:15 -- common/autotest_common.sh@10 -- # set +x 00:13:56.317 16:23:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.317 16:23:15 -- ublk/ublk.sh@96 -- # check_leftover_devices 00:13:56.317 16:23:15 -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:13:56.317 16:23:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.317 16:23:15 -- common/autotest_common.sh@10 -- # set +x 00:13:56.317 16:23:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.317 16:23:15 -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:13:56.317 16:23:15 -- lvol/common.sh@26 -- # jq length 00:13:56.317 16:23:15 -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:13:56.317 16:23:15 -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:13:56.317 16:23:15 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.317 16:23:15 -- common/autotest_common.sh@10 -- # set +x 00:13:56.317 16:23:15 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.317 16:23:15 -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:13:56.317 16:23:15 -- lvol/common.sh@28 -- # jq length 00:13:56.317 ************************************ 00:13:56.317 END TEST test_create_multi_ublk 00:13:56.317 ************************************ 00:13:56.317 16:23:15 -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:13:56.317 00:13:56.317 real 0m3.527s 00:13:56.317 user 0m0.793s 00:13:56.317 sys 0m0.151s 00:13:56.317 16:23:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:13:56.317 16:23:15 -- common/autotest_common.sh@10 -- # set +x 00:13:56.317 16:23:15 -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:13:56.318 16:23:15 -- ublk/ublk.sh@147 -- # cleanup 00:13:56.318 16:23:15 -- ublk/ublk.sh@130 -- # killprocess 69329 00:13:56.318 16:23:15 -- common/autotest_common.sh@936 -- # '[' -z 69329 ']' 00:13:56.318 16:23:15 -- common/autotest_common.sh@940 -- # kill -0 69329 00:13:56.318 16:23:15 -- common/autotest_common.sh@941 -- # uname 00:13:56.318 16:23:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:56.318 16:23:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69329 00:13:56.318 killing process with pid 69329 00:13:56.318 16:23:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:56.318 16:23:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:56.318 16:23:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69329' 00:13:56.318 16:23:15 -- common/autotest_common.sh@955 -- # kill 69329 00:13:56.318 16:23:15 -- common/autotest_common.sh@960 -- # wait 69329 00:13:56.885 [2024-11-09 16:23:16.475848] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:13:56.885 [2024-11-09 16:23:16.476054] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:13:57.453 00:13:57.453 real 0m25.756s 00:13:57.453 user 0m36.857s 00:13:57.453 sys 0m9.755s 00:13:57.453 16:23:17 -- common/autotest_common.sh@1115 -- 
# xtrace_disable 00:13:57.453 ************************************ 00:13:57.453 END TEST ublk 00:13:57.453 ************************************ 00:13:57.453 16:23:17 -- common/autotest_common.sh@10 -- # set +x 00:13:57.453 16:23:17 -- spdk/autotest.sh@247 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:57.453 16:23:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:13:57.453 16:23:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:57.453 16:23:17 -- common/autotest_common.sh@10 -- # set +x 00:13:57.453 ************************************ 00:13:57.453 START TEST ublk_recovery 00:13:57.453 ************************************ 00:13:57.453 16:23:17 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:13:57.716 * Looking for test storage... 00:13:57.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:13:57.716 16:23:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:13:57.716 16:23:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:13:57.716 16:23:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:13:57.716 16:23:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:13:57.716 16:23:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:13:57.716 16:23:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:13:57.716 16:23:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:13:57.716 16:23:17 -- scripts/common.sh@335 -- # IFS=.-: 00:13:57.716 16:23:17 -- scripts/common.sh@335 -- # read -ra ver1 00:13:57.716 16:23:17 -- scripts/common.sh@336 -- # IFS=.-: 00:13:57.716 16:23:17 -- scripts/common.sh@336 -- # read -ra ver2 00:13:57.716 16:23:17 -- scripts/common.sh@337 -- # local 'op=<' 00:13:57.716 16:23:17 -- scripts/common.sh@339 -- # ver1_l=2 00:13:57.716 16:23:17 -- scripts/common.sh@340 -- # ver2_l=1 00:13:57.716 16:23:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:13:57.716 16:23:17 -- scripts/common.sh@343 -- # case "$op" in 00:13:57.716 16:23:17 -- scripts/common.sh@344 -- # : 1 00:13:57.716 16:23:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:13:57.716 16:23:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:57.716 16:23:17 -- scripts/common.sh@364 -- # decimal 1 00:13:57.716 16:23:17 -- scripts/common.sh@352 -- # local d=1 00:13:57.716 16:23:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:57.716 16:23:17 -- scripts/common.sh@354 -- # echo 1 00:13:57.716 16:23:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:13:57.716 16:23:17 -- scripts/common.sh@365 -- # decimal 2 00:13:57.716 16:23:17 -- scripts/common.sh@352 -- # local d=2 00:13:57.716 16:23:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:57.716 16:23:17 -- scripts/common.sh@354 -- # echo 2 00:13:57.716 16:23:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:13:57.716 16:23:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:13:57.716 16:23:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:13:57.716 16:23:17 -- scripts/common.sh@367 -- # return 0 00:13:57.716 16:23:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:57.716 16:23:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:13:57.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.716 --rc genhtml_branch_coverage=1 00:13:57.716 --rc genhtml_function_coverage=1 00:13:57.716 --rc genhtml_legend=1 00:13:57.716 --rc geninfo_all_blocks=1 00:13:57.716 --rc geninfo_unexecuted_blocks=1 00:13:57.716 00:13:57.716 ' 00:13:57.716 16:23:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:13:57.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.716 --rc genhtml_branch_coverage=1 00:13:57.716 --rc genhtml_function_coverage=1 00:13:57.716 --rc genhtml_legend=1 00:13:57.716 --rc geninfo_all_blocks=1 00:13:57.716 --rc geninfo_unexecuted_blocks=1 00:13:57.716 00:13:57.716 ' 00:13:57.716 16:23:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:13:57.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.716 --rc genhtml_branch_coverage=1 00:13:57.716 --rc genhtml_function_coverage=1 00:13:57.716 --rc genhtml_legend=1 00:13:57.716 --rc geninfo_all_blocks=1 00:13:57.716 --rc geninfo_unexecuted_blocks=1 00:13:57.716 00:13:57.716 ' 00:13:57.716 16:23:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:13:57.716 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:57.716 --rc genhtml_branch_coverage=1 00:13:57.716 --rc genhtml_function_coverage=1 00:13:57.716 --rc genhtml_legend=1 00:13:57.716 --rc geninfo_all_blocks=1 00:13:57.716 --rc geninfo_unexecuted_blocks=1 00:13:57.716 00:13:57.716 ' 00:13:57.716 16:23:17 -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:13:57.716 16:23:17 -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:13:57.716 16:23:17 -- lvol/common.sh@7 -- # MALLOC_BS=512 00:13:57.716 16:23:17 -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:13:57.716 16:23:17 -- lvol/common.sh@9 -- # AIO_BS=4096 00:13:57.716 16:23:17 -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:13:57.716 16:23:17 -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:13:57.716 16:23:17 -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:13:57.716 16:23:17 -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:13:57.716 16:23:17 -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:13:57.716 16:23:17 -- ublk/ublk_recovery.sh@19 -- # spdk_pid=69728 00:13:57.716 16:23:17 -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:57.716 16:23:17 -- ublk/ublk_recovery.sh@21 -- # waitforlisten 69728 00:13:57.716 16:23:17 -- 
common/autotest_common.sh@829 -- # '[' -z 69728 ']' 00:13:57.716 16:23:17 -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:13:57.716 16:23:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.716 16:23:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:57.716 16:23:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.716 16:23:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:57.716 16:23:17 -- common/autotest_common.sh@10 -- # set +x 00:13:57.716 [2024-11-09 16:23:17.386710] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:13:57.716 [2024-11-09 16:23:17.386821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69728 ] 00:13:57.977 [2024-11-09 16:23:17.535619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:58.238 [2024-11-09 16:23:17.760119] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:58.238 [2024-11-09 16:23:17.760519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:58.238 [2024-11-09 16:23:17.760579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.183 16:23:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:59.183 16:23:18 -- common/autotest_common.sh@862 -- # return 0 00:13:59.183 16:23:18 -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:13:59.183 16:23:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.183 16:23:18 -- common/autotest_common.sh@10 -- # set +x 00:13:59.183 [2024-11-09 16:23:18.928520] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:13:59.183 16:23:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.183 16:23:18 -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:13:59.183 16:23:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.183 16:23:18 -- common/autotest_common.sh@10 -- # set +x 00:13:59.442 malloc0 00:13:59.442 16:23:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.442 16:23:19 -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:13:59.442 16:23:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.442 16:23:19 -- common/autotest_common.sh@10 -- # set +x 00:13:59.442 [2024-11-09 16:23:19.046386] ublk.c:1886:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 2 queue_depth 128 00:13:59.442 [2024-11-09 16:23:19.046494] ublk.c:1927:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:13:59.442 [2024-11-09 16:23:19.046509] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:13:59.442 [2024-11-09 16:23:19.046517] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:13:59.442 [2024-11-09 16:23:19.054412] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:13:59.442 [2024-11-09 16:23:19.054439] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:13:59.442 [2024-11-09 16:23:19.062252] ublk.c: 327:ublk_ctrl_process_cqe: 
*DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:13:59.442 [2024-11-09 16:23:19.062378] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:13:59.442 [2024-11-09 16:23:19.078254] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:13:59.442 1 00:13:59.442 16:23:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.442 16:23:19 -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:00.377 16:23:20 -- ublk/ublk_recovery.sh@31 -- # fio_proc=69772 00:14:00.377 16:23:20 -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:00.377 16:23:20 -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:00.635 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:00.635 fio-3.35 00:14:00.635 Starting 1 process 00:14:05.905 16:23:25 -- ublk/ublk_recovery.sh@36 -- # kill -9 69728 00:14:05.905 16:23:25 -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:11.189 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 69728 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:11.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:11.189 16:23:30 -- ublk/ublk_recovery.sh@42 -- # spdk_pid=69883 00:14:11.189 16:23:30 -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:11.189 16:23:30 -- ublk/ublk_recovery.sh@44 -- # waitforlisten 69883 00:14:11.189 16:23:30 -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:11.189 16:23:30 -- common/autotest_common.sh@829 -- # '[' -z 69883 ']' 00:14:11.189 16:23:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:11.189 16:23:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:11.189 16:23:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:11.189 16:23:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:11.189 16:23:30 -- common/autotest_common.sh@10 -- # set +x 00:14:11.189 [2024-11-09 16:23:30.183306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
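In outline, the recovery flow this run exercises, as a minimal shell sketch using the same rpc.py subcommands and arguments visible in the surrounding trace (the job-control and waitforlisten plumbing of ublk_recovery.sh is elided; "$spdk_pid" stands in for the target's pid, 69728 in this run):

    # bring up the ublk target and a malloc-backed disk
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
    # drive I/O against /dev/ublkb1, then kill the target mid-run
    fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 \
        --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 &
    kill -9 "$spdk_pid"
    # restart spdk_tgt, recreate the backing bdev, and re-attach the kernel device
    build/bin/spdk_tgt -m 0x3 -L ublk &
    scripts/rpc.py ublk_create_target
    scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
    scripts/rpc.py ublk_recover_disk malloc0 1

The fio job survives the kill because the kernel-side /dev/ublkb1 device persists; the UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY control commands traced just below hand its queues over to the restarted process.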
00:14:11.189 [2024-11-09 16:23:30.183468] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69883 ] 00:14:11.189 [2024-11-09 16:23:30.342511] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:11.189 [2024-11-09 16:23:30.612775] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:11.189 [2024-11-09 16:23:30.613370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.189 [2024-11-09 16:23:30.613426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.134 16:23:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:12.134 16:23:31 -- common/autotest_common.sh@862 -- # return 0 00:14:12.134 16:23:31 -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:12.134 16:23:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.134 16:23:31 -- common/autotest_common.sh@10 -- # set +x 00:14:12.134 [2024-11-09 16:23:31.709726] ublk.c: 720:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:12.134 16:23:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.134 16:23:31 -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:12.134 16:23:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.134 16:23:31 -- common/autotest_common.sh@10 -- # set +x 00:14:12.134 malloc0 00:14:12.134 16:23:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.134 16:23:31 -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:12.134 16:23:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.134 16:23:31 -- common/autotest_common.sh@10 -- # set +x 00:14:12.134 [2024-11-09 16:23:31.843458] ublk.c:2073:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:12.134 [2024-11-09 16:23:31.843523] ublk.c: 933:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:12.134 [2024-11-09 16:23:31.843534] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:12.134 [2024-11-09 16:23:31.851327] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:12.134 [2024-11-09 16:23:31.851361] ublk.c:2002:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:12.134 [2024-11-09 16:23:31.851473] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:12.134 1 00:14:12.134 16:23:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.134 16:23:31 -- ublk/ublk_recovery.sh@52 -- # wait 69772 00:14:12.134 [2024-11-09 16:23:31.859276] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:12.134 [2024-11-09 16:23:31.868048] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:12.134 [2024-11-09 16:23:31.875545] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:12.134 [2024-11-09 16:23:31.875582] ublk.c: 377:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:08.381 00:15:08.381 fio_test: (groupid=0, jobs=1): err= 0: pid=69775: Sat Nov 9 16:24:20 2024 00:15:08.381 read: IOPS=24.8k, BW=96.9MiB/s (102MB/s)(5812MiB/60002msec) 00:15:08.381 slat (nsec): min=916, max=3463.6k, avg=5491.59, 
stdev=3894.04 00:15:08.381 clat (usec): min=725, max=6793.1k, avg=2524.39, stdev=43832.50 00:15:08.381 lat (usec): min=730, max=6793.1k, avg=2529.88, stdev=43832.49 00:15:08.381 clat percentiles (usec): 00:15:08.381 | 1.00th=[ 1827], 5.00th=[ 1926], 10.00th=[ 1975], 20.00th=[ 2040], 00:15:08.381 | 30.00th=[ 2073], 40.00th=[ 2089], 50.00th=[ 2114], 60.00th=[ 2114], 00:15:08.381 | 70.00th=[ 2147], 80.00th=[ 2180], 90.00th=[ 2245], 95.00th=[ 3294], 00:15:08.381 | 99.00th=[ 5407], 99.50th=[ 5866], 99.90th=[ 7504], 99.95th=[ 8225], 00:15:08.381 | 99.99th=[12780] 00:15:08.381 bw ( KiB/s): min=42024, max=124880, per=100.00%, avg=111325.91, stdev=12804.46, samples=106 00:15:08.381 iops : min=10506, max=31220, avg=27831.47, stdev=3201.12, samples=106 00:15:08.381 write: IOPS=24.8k, BW=96.7MiB/s (101MB/s)(5805MiB/60002msec); 0 zone resets 00:15:08.381 slat (nsec): min=949, max=489312, avg=5700.49, stdev=1889.07 00:15:08.381 clat (usec): min=591, max=6793.1k, avg=2628.47, stdev=45251.62 00:15:08.381 lat (usec): min=596, max=6793.1k, avg=2634.17, stdev=45251.61 00:15:08.381 clat percentiles (usec): 00:15:08.381 | 1.00th=[ 1876], 5.00th=[ 2008], 10.00th=[ 2073], 20.00th=[ 2147], 00:15:08.381 | 30.00th=[ 2180], 40.00th=[ 2180], 50.00th=[ 2212], 60.00th=[ 2212], 00:15:08.381 | 70.00th=[ 2245], 80.00th=[ 2278], 90.00th=[ 2311], 95.00th=[ 3228], 00:15:08.381 | 99.00th=[ 5473], 99.50th=[ 5932], 99.90th=[ 7439], 99.95th=[ 8160], 00:15:08.381 | 99.99th=[12911] 00:15:08.381 bw ( KiB/s): min=42032, max=125216, per=100.00%, avg=111201.50, stdev=12936.74, samples=106 00:15:08.381 iops : min=10508, max=31304, avg=27800.37, stdev=3234.18, samples=106 00:15:08.381 lat (usec) : 750=0.01%, 1000=0.01% 00:15:08.381 lat (msec) : 2=8.23%, 4=88.57%, 10=3.16%, 20=0.03%, >=2000=0.01% 00:15:08.381 cpu : usr=5.66%, sys=28.10%, ctx=98219, majf=0, minf=13 00:15:08.381 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:08.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:08.381 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:08.381 issued rwts: total=1487877,1486086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:08.381 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:08.381 00:15:08.381 Run status group 0 (all jobs): 00:15:08.381 READ: bw=96.9MiB/s (102MB/s), 96.9MiB/s-96.9MiB/s (102MB/s-102MB/s), io=5812MiB (6094MB), run=60002-60002msec 00:15:08.382 WRITE: bw=96.7MiB/s (101MB/s), 96.7MiB/s-96.7MiB/s (101MB/s-101MB/s), io=5805MiB (6087MB), run=60002-60002msec 00:15:08.382 00:15:08.382 Disk stats (read/write): 00:15:08.382 ublkb1: ios=1484837/1483130, merge=0/0, ticks=3667463/3691273, in_queue=7358737, util=99.90% 00:15:08.382 16:24:20 -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:08.382 16:24:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.382 16:24:20 -- common/autotest_common.sh@10 -- # set +x 00:15:08.382 [2024-11-09 16:24:20.341214] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:08.382 [2024-11-09 16:24:20.373371] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:08.382 [2024-11-09 16:24:20.373539] ublk.c: 433:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:08.382 [2024-11-09 16:24:20.384256] ublk.c: 327:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:08.382 [2024-11-09 16:24:20.384350] ublk.c: 947:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 
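A quick consistency check on the fio summary above: the issued totals and runtime reproduce the reported bandwidth,

    1487877 reads  * 4096 B / 60.002 s ≈ 101.6 MB/s   (reported: 96.9 MiB/s = 102 MB/s)
    1486086 writes * 4096 B / 60.002 s ≈ 101.4 MB/s   (reported: 96.7 MiB/s = 101 MB/s)

so the kill -9 and user-space recovery cost essentially no throughput over the 60 s window, and ublkb1's util=99.90% confirms the queues stayed busy throughout.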
00:15:08.382 [2024-11-09 16:24:20.384358] ublk.c:1781:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:08.382 16:24:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.382 16:24:20 -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:08.382 16:24:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.382 16:24:20 -- common/autotest_common.sh@10 -- # set +x 00:15:08.382 [2024-11-09 16:24:20.400316] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:08.382 [2024-11-09 16:24:20.408241] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:08.382 [2024-11-09 16:24:20.408269] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:08.382 16:24:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.382 16:24:20 -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:08.382 16:24:20 -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:08.382 16:24:20 -- ublk/ublk_recovery.sh@14 -- # killprocess 69883 00:15:08.382 16:24:20 -- common/autotest_common.sh@936 -- # '[' -z 69883 ']' 00:15:08.382 16:24:20 -- common/autotest_common.sh@940 -- # kill -0 69883 00:15:08.382 16:24:20 -- common/autotest_common.sh@941 -- # uname 00:15:08.382 16:24:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:08.382 16:24:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69883 00:15:08.382 16:24:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:08.382 killing process with pid 69883 00:15:08.382 16:24:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:08.382 16:24:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69883' 00:15:08.382 16:24:20 -- common/autotest_common.sh@955 -- # kill 69883 00:15:08.382 16:24:20 -- common/autotest_common.sh@960 -- # wait 69883 00:15:08.382 [2024-11-09 16:24:21.492490] ublk.c: 797:_ublk_fini: *DEBUG*: finish shutdown 00:15:08.382 [2024-11-09 16:24:21.492540] ublk.c: 728:_ublk_fini_done: *DEBUG*: 00:15:08.382 00:15:08.382 real 1m5.081s 00:15:08.382 user 1m43.167s 00:15:08.382 sys 0m36.403s 00:15:08.382 16:24:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:08.382 ************************************ 00:15:08.382 END TEST ublk_recovery 00:15:08.382 ************************************ 00:15:08.382 16:24:22 -- common/autotest_common.sh@10 -- # set +x 00:15:08.382 16:24:22 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:15:08.382 16:24:22 -- spdk/autotest.sh@255 -- # timing_exit lib 00:15:08.382 16:24:22 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:08.382 16:24:22 -- common/autotest_common.sh@10 -- # set +x 00:15:08.382 16:24:22 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:15:08.382 16:24:22 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:15:08.382 16:24:22 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:15:08.382 16:24:22 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:15:08.382 16:24:22 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:15:08.382 16:24:22 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:15:08.382 16:24:22 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:08.382 16:24:22 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:08.382 16:24:22 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:15:08.382 16:24:22 -- spdk/autotest.sh@329 -- # '[' 1 -eq 1 ']' 00:15:08.382 16:24:22 -- spdk/autotest.sh@330 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:08.382 16:24:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:08.382 16:24:22 -- common/autotest_common.sh@1093 
-- # xtrace_disable 00:15:08.382 16:24:22 -- common/autotest_common.sh@10 -- # set +x 00:15:08.382 ************************************ 00:15:08.382 START TEST ftl 00:15:08.382 ************************************ 00:15:08.382 16:24:22 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:08.382 * Looking for test storage... 00:15:08.382 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:08.382 16:24:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:08.382 16:24:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:08.382 16:24:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:08.382 16:24:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:08.382 16:24:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:08.382 16:24:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:08.382 16:24:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:08.382 16:24:22 -- scripts/common.sh@335 -- # IFS=.-: 00:15:08.382 16:24:22 -- scripts/common.sh@335 -- # read -ra ver1 00:15:08.382 16:24:22 -- scripts/common.sh@336 -- # IFS=.-: 00:15:08.382 16:24:22 -- scripts/common.sh@336 -- # read -ra ver2 00:15:08.382 16:24:22 -- scripts/common.sh@337 -- # local 'op=<' 00:15:08.382 16:24:22 -- scripts/common.sh@339 -- # ver1_l=2 00:15:08.382 16:24:22 -- scripts/common.sh@340 -- # ver2_l=1 00:15:08.382 16:24:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:08.382 16:24:22 -- scripts/common.sh@343 -- # case "$op" in 00:15:08.382 16:24:22 -- scripts/common.sh@344 -- # : 1 00:15:08.382 16:24:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:08.382 16:24:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:08.382 16:24:22 -- scripts/common.sh@364 -- # decimal 1 00:15:08.382 16:24:22 -- scripts/common.sh@352 -- # local d=1 00:15:08.382 16:24:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:08.382 16:24:22 -- scripts/common.sh@354 -- # echo 1 00:15:08.382 16:24:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:08.382 16:24:22 -- scripts/common.sh@365 -- # decimal 2 00:15:08.382 16:24:22 -- scripts/common.sh@352 -- # local d=2 00:15:08.382 16:24:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:08.382 16:24:22 -- scripts/common.sh@354 -- # echo 2 00:15:08.382 16:24:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:08.382 16:24:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:08.382 16:24:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:08.382 16:24:22 -- scripts/common.sh@367 -- # return 0 00:15:08.382 16:24:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:08.382 16:24:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:08.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.382 --rc genhtml_branch_coverage=1 00:15:08.382 --rc genhtml_function_coverage=1 00:15:08.382 --rc genhtml_legend=1 00:15:08.382 --rc geninfo_all_blocks=1 00:15:08.382 --rc geninfo_unexecuted_blocks=1 00:15:08.382 00:15:08.382 ' 00:15:08.382 16:24:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:08.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.382 --rc genhtml_branch_coverage=1 00:15:08.382 --rc genhtml_function_coverage=1 00:15:08.382 --rc genhtml_legend=1 00:15:08.382 --rc geninfo_all_blocks=1 00:15:08.382 --rc geninfo_unexecuted_blocks=1 00:15:08.382 00:15:08.382 ' 00:15:08.382 16:24:22 -- 
common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:08.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.382 --rc genhtml_branch_coverage=1 00:15:08.382 --rc genhtml_function_coverage=1 00:15:08.382 --rc genhtml_legend=1 00:15:08.382 --rc geninfo_all_blocks=1 00:15:08.382 --rc geninfo_unexecuted_blocks=1 00:15:08.382 00:15:08.382 ' 00:15:08.382 16:24:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:08.382 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.382 --rc genhtml_branch_coverage=1 00:15:08.382 --rc genhtml_function_coverage=1 00:15:08.382 --rc genhtml_legend=1 00:15:08.382 --rc geninfo_all_blocks=1 00:15:08.382 --rc geninfo_unexecuted_blocks=1 00:15:08.382 00:15:08.382 ' 00:15:08.382 16:24:22 -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:08.382 16:24:22 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:08.382 16:24:22 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:08.382 16:24:22 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:08.382 16:24:22 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:08.382 16:24:22 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:08.382 16:24:22 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:08.382 16:24:22 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:08.382 16:24:22 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:08.382 16:24:22 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:08.382 16:24:22 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:08.382 16:24:22 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:08.382 16:24:22 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:08.382 16:24:22 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:08.382 16:24:22 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:08.382 16:24:22 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:08.382 16:24:22 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:08.382 16:24:22 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:08.382 16:24:22 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:08.382 16:24:22 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:08.382 16:24:22 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:08.382 16:24:22 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:08.382 16:24:22 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:08.382 16:24:22 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:08.382 16:24:22 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:08.382 16:24:22 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:08.382 16:24:22 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:08.383 16:24:22 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.383 16:24:22 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.383 16:24:22 -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:08.383 16:24:22 -- 
ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:08.383 16:24:22 -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:08.383 16:24:22 -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:08.383 16:24:22 -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:08.383 16:24:22 -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:08.383 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:08.383 0000:00:09.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:08.383 0000:00:08.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:08.383 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:08.383 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:08.383 16:24:23 -- ftl/ftl.sh@37 -- # spdk_tgt_pid=70698 00:15:08.383 16:24:23 -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:08.383 16:24:23 -- ftl/ftl.sh@38 -- # waitforlisten 70698 00:15:08.383 16:24:23 -- common/autotest_common.sh@829 -- # '[' -z 70698 ']' 00:15:08.383 16:24:23 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.383 16:24:23 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.383 16:24:23 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.383 16:24:23 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.383 16:24:23 -- common/autotest_common.sh@10 -- # set +x 00:15:08.383 [2024-11-09 16:24:23.151464] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:15:08.383 [2024-11-09 16:24:23.151610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70698 ] 00:15:08.383 [2024-11-09 16:24:23.304564] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.383 [2024-11-09 16:24:23.563469] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:08.383 [2024-11-09 16:24:23.563734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.383 16:24:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.383 16:24:23 -- common/autotest_common.sh@862 -- # return 0 00:15:08.383 16:24:23 -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:08.383 16:24:24 -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:08.383 16:24:24 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:08.383 16:24:24 -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:08.383 16:24:25 -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:08.383 16:24:25 -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:08.383 16:24:25 -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:08.383 16:24:25 -- ftl/ftl.sh@47 -- # cache_disks=0000:00:06.0 00:15:08.383 16:24:25 -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:08.383 16:24:25 -- ftl/ftl.sh@49 -- # nv_cache=0000:00:06.0 00:15:08.383 16:24:25 -- ftl/ftl.sh@50 
-- # break 00:15:08.383 16:24:25 -- ftl/ftl.sh@53 -- # '[' -z 0000:00:06.0 ']' 00:15:08.383 16:24:25 -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:08.383 16:24:25 -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:08.383 16:24:25 -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:06.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:08.383 16:24:25 -- ftl/ftl.sh@60 -- # base_disks=0000:00:07.0 00:15:08.383 16:24:25 -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:08.383 16:24:25 -- ftl/ftl.sh@62 -- # device=0000:00:07.0 00:15:08.383 16:24:25 -- ftl/ftl.sh@63 -- # break 00:15:08.383 16:24:25 -- ftl/ftl.sh@66 -- # killprocess 70698 00:15:08.383 16:24:25 -- common/autotest_common.sh@936 -- # '[' -z 70698 ']' 00:15:08.383 16:24:25 -- common/autotest_common.sh@940 -- # kill -0 70698 00:15:08.383 16:24:25 -- common/autotest_common.sh@941 -- # uname 00:15:08.383 16:24:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:08.383 16:24:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70698 00:15:08.383 16:24:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:08.383 16:24:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:08.383 killing process with pid 70698 00:15:08.383 16:24:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70698' 00:15:08.383 16:24:25 -- common/autotest_common.sh@955 -- # kill 70698 00:15:08.383 16:24:25 -- common/autotest_common.sh@960 -- # wait 70698 00:15:08.383 16:24:26 -- ftl/ftl.sh@68 -- # '[' -z 0000:00:07.0 ']' 00:15:08.383 16:24:26 -- ftl/ftl.sh@73 -- # [[ -z '' ]] 00:15:08.383 16:24:26 -- ftl/ftl.sh@74 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:15:08.383 16:24:26 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:15:08.383 16:24:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:08.383 16:24:26 -- common/autotest_common.sh@10 -- # set +x 00:15:08.383 ************************************ 00:15:08.383 START TEST ftl_fio_basic 00:15:08.383 ************************************ 00:15:08.383 16:24:26 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:07.0 0000:00:06.0 basic 00:15:08.383 * Looking for test storage... 
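For reference, the device-selection logic ftl.sh just traced (ftl.sh@47 through @63 above) boils down to the sketch below: bdevs exposing 64-byte metadata become NV-cache candidates, and any other large non-zoned NVMe bdev becomes the base device. This is a minimal reconstruction, assuming a running spdk_tgt reachable at the default RPC socket and jq installed; the jq filters are copied verbatim from the trace, while the --arg parametrization and head -n1 (standing in for the script's first-match break) are illustrative.

#!/usr/bin/env bash
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# NV-cache candidates: 64-byte metadata, non-zoned, >= 1310720 blocks.
nv_cache=$("$rpc_py" bdev_get_bdevs | jq -r \
  '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' \
  | head -n1)

# Base candidates: any other large, non-zoned NVMe bdev. The traced script
# embeds the discovered address literally ("0000:00:06.0"); --arg is used
# here only to keep the sketch self-contained.
device=$("$rpc_py" bdev_get_bdevs | jq -r --arg nv "$nv_cache" \
  '.[] | select(.driver_specific.nvme[0].pci_address != $nv and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' \
  | head -n1)

echo "base=$device cache=$nv_cache"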
00:15:08.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:08.383 16:24:26 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:08.383 16:24:26 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:08.383 16:24:26 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:08.383 16:24:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:08.383 16:24:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:08.383 16:24:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:08.383 16:24:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:08.383 16:24:27 -- scripts/common.sh@335 -- # IFS=.-: 00:15:08.383 16:24:27 -- scripts/common.sh@335 -- # read -ra ver1 00:15:08.383 16:24:27 -- scripts/common.sh@336 -- # IFS=.-: 00:15:08.383 16:24:27 -- scripts/common.sh@336 -- # read -ra ver2 00:15:08.383 16:24:27 -- scripts/common.sh@337 -- # local 'op=<' 00:15:08.383 16:24:27 -- scripts/common.sh@339 -- # ver1_l=2 00:15:08.383 16:24:27 -- scripts/common.sh@340 -- # ver2_l=1 00:15:08.383 16:24:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:08.383 16:24:27 -- scripts/common.sh@343 -- # case "$op" in 00:15:08.383 16:24:27 -- scripts/common.sh@344 -- # : 1 00:15:08.383 16:24:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:08.383 16:24:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:08.383 16:24:27 -- scripts/common.sh@364 -- # decimal 1 00:15:08.383 16:24:27 -- scripts/common.sh@352 -- # local d=1 00:15:08.383 16:24:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:08.383 16:24:27 -- scripts/common.sh@354 -- # echo 1 00:15:08.383 16:24:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:08.383 16:24:27 -- scripts/common.sh@365 -- # decimal 2 00:15:08.383 16:24:27 -- scripts/common.sh@352 -- # local d=2 00:15:08.383 16:24:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:08.383 16:24:27 -- scripts/common.sh@354 -- # echo 2 00:15:08.383 16:24:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:08.383 16:24:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:08.383 16:24:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:08.383 16:24:27 -- scripts/common.sh@367 -- # return 0 00:15:08.383 16:24:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:08.383 16:24:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:08.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.383 --rc genhtml_branch_coverage=1 00:15:08.383 --rc genhtml_function_coverage=1 00:15:08.383 --rc genhtml_legend=1 00:15:08.383 --rc geninfo_all_blocks=1 00:15:08.383 --rc geninfo_unexecuted_blocks=1 00:15:08.383 00:15:08.383 ' 00:15:08.383 16:24:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:08.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.383 --rc genhtml_branch_coverage=1 00:15:08.383 --rc genhtml_function_coverage=1 00:15:08.383 --rc genhtml_legend=1 00:15:08.383 --rc geninfo_all_blocks=1 00:15:08.383 --rc geninfo_unexecuted_blocks=1 00:15:08.383 00:15:08.383 ' 00:15:08.383 16:24:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:08.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.383 --rc genhtml_branch_coverage=1 00:15:08.383 --rc genhtml_function_coverage=1 00:15:08.383 --rc genhtml_legend=1 00:15:08.383 --rc geninfo_all_blocks=1 00:15:08.383 --rc geninfo_unexecuted_blocks=1 00:15:08.383 00:15:08.383 ' 00:15:08.383 16:24:27 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:08.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.383 --rc genhtml_branch_coverage=1 00:15:08.383 --rc genhtml_function_coverage=1 00:15:08.383 --rc genhtml_legend=1 00:15:08.383 --rc geninfo_all_blocks=1 00:15:08.383 --rc geninfo_unexecuted_blocks=1 00:15:08.383 00:15:08.383 ' 00:15:08.383 16:24:27 -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:08.383 16:24:27 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:08.383 16:24:27 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:08.383 16:24:27 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:08.383 16:24:27 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:15:08.383 16:24:27 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:08.383 16:24:27 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:08.383 16:24:27 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:08.383 16:24:27 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:08.383 16:24:27 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:08.383 16:24:27 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:08.383 16:24:27 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:08.383 16:24:27 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:08.383 16:24:27 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:08.383 16:24:27 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:08.383 16:24:27 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:08.383 16:24:27 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:08.383 16:24:27 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:08.384 16:24:27 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:08.384 16:24:27 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:08.384 16:24:27 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:08.384 16:24:27 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:08.384 16:24:27 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:08.384 16:24:27 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:08.384 16:24:27 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:08.384 16:24:27 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:08.384 16:24:27 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:08.384 16:24:27 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.384 16:24:27 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:08.384 16:24:27 -- ftl/fio.sh@11 -- # declare -A suite 00:15:08.384 16:24:27 -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:08.384 16:24:27 -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:08.384 16:24:27 -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:08.384 16:24:27 -- ftl/fio.sh@16 -- # 
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:08.384 16:24:27 -- ftl/fio.sh@23 -- # device=0000:00:07.0 00:15:08.384 16:24:27 -- ftl/fio.sh@24 -- # cache_device=0000:00:06.0 00:15:08.384 16:24:27 -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:08.384 16:24:27 -- ftl/fio.sh@26 -- # uuid= 00:15:08.384 16:24:27 -- ftl/fio.sh@27 -- # timeout=240 00:15:08.384 16:24:27 -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:08.384 16:24:27 -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:08.384 16:24:27 -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:08.384 16:24:27 -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:08.384 16:24:27 -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:08.384 16:24:27 -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:08.384 16:24:27 -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:08.384 16:24:27 -- ftl/fio.sh@45 -- # svcpid=70829 00:15:08.384 16:24:27 -- ftl/fio.sh@46 -- # waitforlisten 70829 00:15:08.384 16:24:27 -- common/autotest_common.sh@829 -- # '[' -z 70829 ']' 00:15:08.384 16:24:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.384 16:24:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.384 16:24:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.384 16:24:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.384 16:24:27 -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:08.384 16:24:27 -- common/autotest_common.sh@10 -- # set +x 00:15:08.384 [2024-11-09 16:24:27.172983] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
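The fio.sh prologue traced above dispatches on its third argument ("basic" in this run) through a bash associative array of fio job names. A minimal sketch of that lookup follows; the suite strings and exported variable values are verbatim from the trace, but the final echo stands in for the real fio invocation, which depends on the SPDK fio plugin setup the actual script performs and is not reproduced here.

#!/usr/bin/env bash
declare -A suite
suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'
suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght'

device=$1 cache_device=$2 tests=${suite[$3]}
[ -n "$tests" ] || { echo "unknown suite: $3" >&2; exit 1; }

# Consumed by the fio job files / SPDK fio plugin, per the trace above.
export FTL_BDEV_NAME=ftl0
export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

for t in $tests; do
    echo "would run fio job: $t (device=$device cache=$cache_device)"
done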
00:15:08.384 [2024-11-09 16:24:27.173123] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70829 ] 00:15:08.384 [2024-11-09 16:24:27.324747] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:08.384 [2024-11-09 16:24:27.496259] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:08.384 [2024-11-09 16:24:27.496638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.384 [2024-11-09 16:24:27.496931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.384 [2024-11-09 16:24:27.496945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.958 16:24:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.958 16:24:28 -- common/autotest_common.sh@862 -- # return 0 00:15:08.958 16:24:28 -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:15:08.958 16:24:28 -- ftl/common.sh@54 -- # local name=nvme0 00:15:08.958 16:24:28 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:15:08.958 16:24:28 -- ftl/common.sh@56 -- # local size=103424 00:15:08.958 16:24:28 -- ftl/common.sh@59 -- # local base_bdev 00:15:08.958 16:24:28 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:15:09.217 16:24:28 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:09.217 16:24:28 -- ftl/common.sh@62 -- # local base_size 00:15:09.217 16:24:28 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:09.217 16:24:28 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:15:09.217 16:24:28 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:09.217 16:24:28 -- common/autotest_common.sh@1369 -- # local bs 00:15:09.217 16:24:28 -- common/autotest_common.sh@1370 -- # local nb 00:15:09.217 16:24:28 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:09.475 16:24:29 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:09.475 { 00:15:09.476 "name": "nvme0n1", 00:15:09.476 "aliases": [ 00:15:09.476 "8131cd64-ebe6-48d0-a380-6c0e78842748" 00:15:09.476 ], 00:15:09.476 "product_name": "NVMe disk", 00:15:09.476 "block_size": 4096, 00:15:09.476 "num_blocks": 1310720, 00:15:09.476 "uuid": "8131cd64-ebe6-48d0-a380-6c0e78842748", 00:15:09.476 "assigned_rate_limits": { 00:15:09.476 "rw_ios_per_sec": 0, 00:15:09.476 "rw_mbytes_per_sec": 0, 00:15:09.476 "r_mbytes_per_sec": 0, 00:15:09.476 "w_mbytes_per_sec": 0 00:15:09.476 }, 00:15:09.476 "claimed": false, 00:15:09.476 "zoned": false, 00:15:09.476 "supported_io_types": { 00:15:09.476 "read": true, 00:15:09.476 "write": true, 00:15:09.476 "unmap": true, 00:15:09.476 "write_zeroes": true, 00:15:09.476 "flush": true, 00:15:09.476 "reset": true, 00:15:09.476 "compare": true, 00:15:09.476 "compare_and_write": false, 00:15:09.476 "abort": true, 00:15:09.476 "nvme_admin": true, 00:15:09.476 "nvme_io": true 00:15:09.476 }, 00:15:09.476 "driver_specific": { 00:15:09.476 "nvme": [ 00:15:09.476 { 00:15:09.476 "pci_address": "0000:00:07.0", 00:15:09.476 "trid": { 00:15:09.476 "trtype": "PCIe", 00:15:09.476 "traddr": "0000:00:07.0" 00:15:09.476 }, 00:15:09.476 "ctrlr_data": { 00:15:09.476 "cntlid": 0, 00:15:09.476 "vendor_id": "0x1b36", 00:15:09.476 "model_number": "QEMU NVMe Ctrl", 00:15:09.476 "serial_number": 
"12341", 00:15:09.476 "firmware_revision": "8.0.0", 00:15:09.476 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:09.476 "oacs": { 00:15:09.476 "security": 0, 00:15:09.476 "format": 1, 00:15:09.476 "firmware": 0, 00:15:09.476 "ns_manage": 1 00:15:09.476 }, 00:15:09.476 "multi_ctrlr": false, 00:15:09.476 "ana_reporting": false 00:15:09.476 }, 00:15:09.476 "vs": { 00:15:09.476 "nvme_version": "1.4" 00:15:09.476 }, 00:15:09.476 "ns_data": { 00:15:09.476 "id": 1, 00:15:09.476 "can_share": false 00:15:09.476 } 00:15:09.476 } 00:15:09.476 ], 00:15:09.476 "mp_policy": "active_passive" 00:15:09.476 } 00:15:09.476 } 00:15:09.476 ]' 00:15:09.476 16:24:29 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:09.476 16:24:29 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:09.476 16:24:29 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:09.476 16:24:29 -- common/autotest_common.sh@1373 -- # nb=1310720 00:15:09.476 16:24:29 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:15:09.476 16:24:29 -- common/autotest_common.sh@1377 -- # echo 5120 00:15:09.476 16:24:29 -- ftl/common.sh@63 -- # base_size=5120 00:15:09.476 16:24:29 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:09.476 16:24:29 -- ftl/common.sh@67 -- # clear_lvols 00:15:09.476 16:24:29 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:09.476 16:24:29 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:09.734 16:24:29 -- ftl/common.sh@28 -- # stores= 00:15:09.734 16:24:29 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:09.994 16:24:29 -- ftl/common.sh@68 -- # lvs=c9522424-d2ea-450d-b4f5-2845027c1db9 00:15:09.995 16:24:29 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u c9522424-d2ea-450d-b4f5-2845027c1db9 00:15:09.995 16:24:29 -- ftl/fio.sh@48 -- # split_bdev=219d1749-724d-4c16-ad8f-f4a2d184c1ed 00:15:09.995 16:24:29 -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:06.0 219d1749-724d-4c16-ad8f-f4a2d184c1ed 00:15:09.995 16:24:29 -- ftl/common.sh@35 -- # local name=nvc0 00:15:09.995 16:24:29 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:15:09.995 16:24:29 -- ftl/common.sh@37 -- # local base_bdev=219d1749-724d-4c16-ad8f-f4a2d184c1ed 00:15:09.995 16:24:29 -- ftl/common.sh@38 -- # local cache_size= 00:15:09.995 16:24:29 -- ftl/common.sh@41 -- # get_bdev_size 219d1749-724d-4c16-ad8f-f4a2d184c1ed 00:15:09.995 16:24:29 -- common/autotest_common.sh@1367 -- # local bdev_name=219d1749-724d-4c16-ad8f-f4a2d184c1ed 00:15:09.995 16:24:29 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:09.995 16:24:29 -- common/autotest_common.sh@1369 -- # local bs 00:15:09.995 16:24:29 -- common/autotest_common.sh@1370 -- # local nb 00:15:09.995 16:24:29 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 219d1749-724d-4c16-ad8f-f4a2d184c1ed 00:15:10.256 16:24:29 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:10.256 { 00:15:10.256 "name": "219d1749-724d-4c16-ad8f-f4a2d184c1ed", 00:15:10.256 "aliases": [ 00:15:10.256 "lvs/nvme0n1p0" 00:15:10.256 ], 00:15:10.256 "product_name": "Logical Volume", 00:15:10.256 "block_size": 4096, 00:15:10.256 "num_blocks": 26476544, 00:15:10.256 "uuid": "219d1749-724d-4c16-ad8f-f4a2d184c1ed", 00:15:10.256 "assigned_rate_limits": { 00:15:10.256 "rw_ios_per_sec": 0, 00:15:10.256 "rw_mbytes_per_sec": 0, 00:15:10.256 "r_mbytes_per_sec": 0, 00:15:10.256 
"w_mbytes_per_sec": 0 00:15:10.256 }, 00:15:10.256 "claimed": false, 00:15:10.256 "zoned": false, 00:15:10.256 "supported_io_types": { 00:15:10.256 "read": true, 00:15:10.256 "write": true, 00:15:10.256 "unmap": true, 00:15:10.256 "write_zeroes": true, 00:15:10.256 "flush": false, 00:15:10.256 "reset": true, 00:15:10.256 "compare": false, 00:15:10.256 "compare_and_write": false, 00:15:10.256 "abort": false, 00:15:10.256 "nvme_admin": false, 00:15:10.256 "nvme_io": false 00:15:10.256 }, 00:15:10.256 "driver_specific": { 00:15:10.256 "lvol": { 00:15:10.256 "lvol_store_uuid": "c9522424-d2ea-450d-b4f5-2845027c1db9", 00:15:10.256 "base_bdev": "nvme0n1", 00:15:10.256 "thin_provision": true, 00:15:10.256 "snapshot": false, 00:15:10.256 "clone": false, 00:15:10.256 "esnap_clone": false 00:15:10.256 } 00:15:10.256 } 00:15:10.256 } 00:15:10.256 ]' 00:15:10.256 16:24:29 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:10.256 16:24:29 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:10.256 16:24:29 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:10.256 16:24:29 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:10.256 16:24:29 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:10.256 16:24:29 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:10.256 16:24:29 -- ftl/common.sh@41 -- # local base_size=5171 00:15:10.256 16:24:29 -- ftl/common.sh@44 -- # local nvc_bdev 00:15:10.256 16:24:29 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:15:10.514 16:24:30 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:10.514 16:24:30 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:10.514 16:24:30 -- ftl/common.sh@48 -- # get_bdev_size 219d1749-724d-4c16-ad8f-f4a2d184c1ed 00:15:10.514 16:24:30 -- common/autotest_common.sh@1367 -- # local bdev_name=219d1749-724d-4c16-ad8f-f4a2d184c1ed 00:15:10.514 16:24:30 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:10.514 16:24:30 -- common/autotest_common.sh@1369 -- # local bs 00:15:10.514 16:24:30 -- common/autotest_common.sh@1370 -- # local nb 00:15:10.514 16:24:30 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 219d1749-724d-4c16-ad8f-f4a2d184c1ed 00:15:10.772 16:24:30 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:10.772 { 00:15:10.772 "name": "219d1749-724d-4c16-ad8f-f4a2d184c1ed", 00:15:10.772 "aliases": [ 00:15:10.772 "lvs/nvme0n1p0" 00:15:10.772 ], 00:15:10.772 "product_name": "Logical Volume", 00:15:10.772 "block_size": 4096, 00:15:10.772 "num_blocks": 26476544, 00:15:10.772 "uuid": "219d1749-724d-4c16-ad8f-f4a2d184c1ed", 00:15:10.772 "assigned_rate_limits": { 00:15:10.772 "rw_ios_per_sec": 0, 00:15:10.772 "rw_mbytes_per_sec": 0, 00:15:10.772 "r_mbytes_per_sec": 0, 00:15:10.772 "w_mbytes_per_sec": 0 00:15:10.772 }, 00:15:10.772 "claimed": false, 00:15:10.772 "zoned": false, 00:15:10.772 "supported_io_types": { 00:15:10.772 "read": true, 00:15:10.772 "write": true, 00:15:10.772 "unmap": true, 00:15:10.772 "write_zeroes": true, 00:15:10.772 "flush": false, 00:15:10.773 "reset": true, 00:15:10.773 "compare": false, 00:15:10.773 "compare_and_write": false, 00:15:10.773 "abort": false, 00:15:10.773 "nvme_admin": false, 00:15:10.773 "nvme_io": false 00:15:10.773 }, 00:15:10.773 "driver_specific": { 00:15:10.773 "lvol": { 00:15:10.773 "lvol_store_uuid": "c9522424-d2ea-450d-b4f5-2845027c1db9", 00:15:10.773 "base_bdev": "nvme0n1", 00:15:10.773 "thin_provision": true, 
00:15:10.773 "snapshot": false, 00:15:10.773 "clone": false, 00:15:10.773 "esnap_clone": false 00:15:10.773 } 00:15:10.773 } 00:15:10.773 } 00:15:10.773 ]' 00:15:10.773 16:24:30 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:10.773 16:24:30 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:10.773 16:24:30 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:10.773 16:24:30 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:10.773 16:24:30 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:10.773 16:24:30 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:10.773 16:24:30 -- ftl/common.sh@48 -- # cache_size=5171 00:15:10.773 16:24:30 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:11.031 16:24:30 -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:11.031 16:24:30 -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:11.031 16:24:30 -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:11.031 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:11.031 16:24:30 -- ftl/fio.sh@56 -- # get_bdev_size 219d1749-724d-4c16-ad8f-f4a2d184c1ed 00:15:11.031 16:24:30 -- common/autotest_common.sh@1367 -- # local bdev_name=219d1749-724d-4c16-ad8f-f4a2d184c1ed 00:15:11.031 16:24:30 -- common/autotest_common.sh@1368 -- # local bdev_info 00:15:11.031 16:24:30 -- common/autotest_common.sh@1369 -- # local bs 00:15:11.031 16:24:30 -- common/autotest_common.sh@1370 -- # local nb 00:15:11.031 16:24:30 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 219d1749-724d-4c16-ad8f-f4a2d184c1ed 00:15:11.290 16:24:30 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:15:11.290 { 00:15:11.290 "name": "219d1749-724d-4c16-ad8f-f4a2d184c1ed", 00:15:11.290 "aliases": [ 00:15:11.290 "lvs/nvme0n1p0" 00:15:11.290 ], 00:15:11.290 "product_name": "Logical Volume", 00:15:11.290 "block_size": 4096, 00:15:11.290 "num_blocks": 26476544, 00:15:11.290 "uuid": "219d1749-724d-4c16-ad8f-f4a2d184c1ed", 00:15:11.290 "assigned_rate_limits": { 00:15:11.290 "rw_ios_per_sec": 0, 00:15:11.290 "rw_mbytes_per_sec": 0, 00:15:11.290 "r_mbytes_per_sec": 0, 00:15:11.290 "w_mbytes_per_sec": 0 00:15:11.290 }, 00:15:11.290 "claimed": false, 00:15:11.290 "zoned": false, 00:15:11.290 "supported_io_types": { 00:15:11.290 "read": true, 00:15:11.290 "write": true, 00:15:11.290 "unmap": true, 00:15:11.290 "write_zeroes": true, 00:15:11.290 "flush": false, 00:15:11.290 "reset": true, 00:15:11.290 "compare": false, 00:15:11.290 "compare_and_write": false, 00:15:11.290 "abort": false, 00:15:11.290 "nvme_admin": false, 00:15:11.290 "nvme_io": false 00:15:11.290 }, 00:15:11.290 "driver_specific": { 00:15:11.290 "lvol": { 00:15:11.290 "lvol_store_uuid": "c9522424-d2ea-450d-b4f5-2845027c1db9", 00:15:11.290 "base_bdev": "nvme0n1", 00:15:11.290 "thin_provision": true, 00:15:11.290 "snapshot": false, 00:15:11.290 "clone": false, 00:15:11.290 "esnap_clone": false 00:15:11.290 } 00:15:11.290 } 00:15:11.290 } 00:15:11.290 ]' 00:15:11.290 16:24:30 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:15:11.290 16:24:30 -- common/autotest_common.sh@1372 -- # bs=4096 00:15:11.290 16:24:30 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:15:11.290 16:24:30 -- common/autotest_common.sh@1373 -- # nb=26476544 00:15:11.290 16:24:30 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:15:11.290 16:24:30 -- common/autotest_common.sh@1377 -- # echo 103424 00:15:11.290 
16:24:30 -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:11.290 16:24:30 -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:11.290 16:24:30 -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 219d1749-724d-4c16-ad8f-f4a2d184c1ed -c nvc0n1p0 --l2p_dram_limit 60 00:15:11.549 [2024-11-09 16:24:31.095904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.549 [2024-11-09 16:24:31.095945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:11.549 [2024-11-09 16:24:31.095960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:15:11.549 [2024-11-09 16:24:31.095967] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.549 [2024-11-09 16:24:31.096035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.549 [2024-11-09 16:24:31.096045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:11.549 [2024-11-09 16:24:31.096054] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:15:11.549 [2024-11-09 16:24:31.096061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.549 [2024-11-09 16:24:31.096090] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:11.549 [2024-11-09 16:24:31.096697] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:11.549 [2024-11-09 16:24:31.096725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.549 [2024-11-09 16:24:31.096732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:11.549 [2024-11-09 16:24:31.096741] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.638 ms 00:15:11.549 [2024-11-09 16:24:31.096747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.549 [2024-11-09 16:24:31.096809] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8f2c86cb-02e0-41b0-820f-cc3082adaaef 00:15:11.549 [2024-11-09 16:24:31.098071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.549 [2024-11-09 16:24:31.098100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:11.549 [2024-11-09 16:24:31.098109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:15:11.549 [2024-11-09 16:24:31.098117] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.549 [2024-11-09 16:24:31.104764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.549 [2024-11-09 16:24:31.104797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:11.549 [2024-11-09 16:24:31.104804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.588 ms 00:15:11.549 [2024-11-09 16:24:31.104813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.549 [2024-11-09 16:24:31.104889] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.549 [2024-11-09 16:24:31.104899] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:11.549 [2024-11-09 16:24:31.104905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:15:11.549 [2024-11-09 16:24:31.104916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.549 [2024-11-09 16:24:31.104970] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:15:11.549 [2024-11-09 16:24:31.104980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:11.549 [2024-11-09 16:24:31.104986] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:11.549 [2024-11-09 16:24:31.104996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.549 [2024-11-09 16:24:31.105022] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:11.549 [2024-11-09 16:24:31.108342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.549 [2024-11-09 16:24:31.108365] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:11.549 [2024-11-09 16:24:31.108375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.324 ms 00:15:11.549 [2024-11-09 16:24:31.108380] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.549 [2024-11-09 16:24:31.108418] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.549 [2024-11-09 16:24:31.108425] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:11.549 [2024-11-09 16:24:31.108434] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:11.549 [2024-11-09 16:24:31.108440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.549 [2024-11-09 16:24:31.108465] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:11.549 [2024-11-09 16:24:31.108558] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:15:11.549 [2024-11-09 16:24:31.108571] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:11.549 [2024-11-09 16:24:31.108580] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:15:11.549 [2024-11-09 16:24:31.108590] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:11.549 [2024-11-09 16:24:31.108598] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:11.549 [2024-11-09 16:24:31.108606] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:11.549 [2024-11-09 16:24:31.108612] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:11.549 [2024-11-09 16:24:31.108623] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:15:11.549 [2024-11-09 16:24:31.108628] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:15:11.549 [2024-11-09 16:24:31.108636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.549 [2024-11-09 16:24:31.108642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:11.549 [2024-11-09 16:24:31.108649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:15:11.549 [2024-11-09 16:24:31.108655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.549 [2024-11-09 16:24:31.108713] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.549 [2024-11-09 16:24:31.108721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:11.549 [2024-11-09 16:24:31.108728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.037 ms 00:15:11.549 [2024-11-09 16:24:31.108733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.549 [2024-11-09 16:24:31.108811] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:11.550 [2024-11-09 16:24:31.108819] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:11.550 [2024-11-09 16:24:31.108827] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:11.550 [2024-11-09 16:24:31.108833] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:11.550 [2024-11-09 16:24:31.108841] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:15:11.550 [2024-11-09 16:24:31.108846] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:11.550 [2024-11-09 16:24:31.108853] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:11.550 [2024-11-09 16:24:31.108858] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:11.550 [2024-11-09 16:24:31.108865] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:11.550 [2024-11-09 16:24:31.108870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:11.550 [2024-11-09 16:24:31.108876] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:11.550 [2024-11-09 16:24:31.108883] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:11.550 [2024-11-09 16:24:31.108891] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:11.550 [2024-11-09 16:24:31.108896] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:11.550 [2024-11-09 16:24:31.108902] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:15:11.550 [2024-11-09 16:24:31.108907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:11.550 [2024-11-09 16:24:31.108915] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:11.550 [2024-11-09 16:24:31.108920] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:15:11.550 [2024-11-09 16:24:31.108927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:11.550 [2024-11-09 16:24:31.108932] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:15:11.550 [2024-11-09 16:24:31.108939] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:15:11.550 [2024-11-09 16:24:31.108944] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:15:11.550 [2024-11-09 16:24:31.108950] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:11.550 [2024-11-09 16:24:31.108955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:11.550 [2024-11-09 16:24:31.108962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:11.550 [2024-11-09 16:24:31.108967] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:11.550 [2024-11-09 16:24:31.108973] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:15:11.550 [2024-11-09 16:24:31.108978] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:11.550 [2024-11-09 16:24:31.108991] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:11.550 [2024-11-09 16:24:31.108997] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:11.550 [2024-11-09 16:24:31.109003] ftl_layout.c: 118:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:11.550 [2024-11-09 16:24:31.109008] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:11.550 [2024-11-09 16:24:31.109015] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:15:11.550 [2024-11-09 16:24:31.109034] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:15:11.550 [2024-11-09 16:24:31.109041] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:11.550 [2024-11-09 16:24:31.109046] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:11.550 [2024-11-09 16:24:31.109053] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:11.550 [2024-11-09 16:24:31.109058] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:11.550 [2024-11-09 16:24:31.109065] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:15:11.550 [2024-11-09 16:24:31.109070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:11.550 [2024-11-09 16:24:31.109076] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:11.550 [2024-11-09 16:24:31.109082] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:11.550 [2024-11-09 16:24:31.109089] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:11.550 [2024-11-09 16:24:31.109094] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:11.550 [2024-11-09 16:24:31.109103] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:11.550 [2024-11-09 16:24:31.109108] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:11.550 [2024-11-09 16:24:31.109115] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:11.550 [2024-11-09 16:24:31.109120] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:11.550 [2024-11-09 16:24:31.109128] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:11.550 [2024-11-09 16:24:31.109135] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:11.550 [2024-11-09 16:24:31.109142] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:11.550 [2024-11-09 16:24:31.109150] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:11.550 [2024-11-09 16:24:31.109161] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:11.550 [2024-11-09 16:24:31.109167] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:15:11.550 [2024-11-09 16:24:31.109188] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:15:11.550 [2024-11-09 16:24:31.109194] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:15:11.550 [2024-11-09 16:24:31.109202] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:15:11.550 [2024-11-09 16:24:31.109207] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:15:11.550 
[2024-11-09 16:24:31.109214] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:15:11.550 [2024-11-09 16:24:31.109219] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:15:11.550 [2024-11-09 16:24:31.109249] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:15:11.550 [2024-11-09 16:24:31.109255] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:15:11.550 [2024-11-09 16:24:31.109263] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:15:11.550 [2024-11-09 16:24:31.109269] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:15:11.550 [2024-11-09 16:24:31.109280] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:15:11.550 [2024-11-09 16:24:31.109286] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:11.550 [2024-11-09 16:24:31.109294] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:11.550 [2024-11-09 16:24:31.109304] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:11.550 [2024-11-09 16:24:31.109311] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:11.550 [2024-11-09 16:24:31.109316] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:11.550 [2024-11-09 16:24:31.109323] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:11.550 [2024-11-09 16:24:31.109333] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.550 [2024-11-09 16:24:31.109340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:11.550 [2024-11-09 16:24:31.109347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:15:11.550 [2024-11-09 16:24:31.109353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.550 [2024-11-09 16:24:31.123088] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.550 [2024-11-09 16:24:31.123123] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:11.550 [2024-11-09 16:24:31.123132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.666 ms 00:15:11.550 [2024-11-09 16:24:31.123141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.550 [2024-11-09 16:24:31.123241] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.550 [2024-11-09 16:24:31.123252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:11.550 [2024-11-09 16:24:31.123261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:15:11.550 [2024-11-09 16:24:31.123269] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.550 [2024-11-09 16:24:31.151198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.550 [2024-11-09 16:24:31.151239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:11.550 [2024-11-09 16:24:31.151248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.882 ms 00:15:11.550 [2024-11-09 16:24:31.151257] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.550 [2024-11-09 16:24:31.151290] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.550 [2024-11-09 16:24:31.151299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:11.550 [2024-11-09 16:24:31.151307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:15:11.550 [2024-11-09 16:24:31.151315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.550 [2024-11-09 16:24:31.151711] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.550 [2024-11-09 16:24:31.151737] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:11.550 [2024-11-09 16:24:31.151745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.356 ms 00:15:11.550 [2024-11-09 16:24:31.151754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.550 [2024-11-09 16:24:31.151861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.550 [2024-11-09 16:24:31.151873] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:11.550 [2024-11-09 16:24:31.151879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:15:11.550 [2024-11-09 16:24:31.151886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.551 [2024-11-09 16:24:31.183726] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.551 [2024-11-09 16:24:31.183760] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:11.551 [2024-11-09 16:24:31.183770] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.819 ms 00:15:11.551 [2024-11-09 16:24:31.183779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.551 [2024-11-09 16:24:31.193934] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:11.551 [2024-11-09 16:24:31.209137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.551 [2024-11-09 16:24:31.209166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:11.551 [2024-11-09 16:24:31.209183] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.264 ms 00:15:11.551 [2024-11-09 16:24:31.209189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.551 [2024-11-09 16:24:31.257715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:11.551 [2024-11-09 16:24:31.257748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:11.551 [2024-11-09 16:24:31.257759] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.494 ms 00:15:11.551 [2024-11-09 16:24:31.257765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:11.551 [2024-11-09 16:24:31.257806] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
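Everything from the lvol split down to the layout dump and L2P initialization above is driven by a single RPC, reproduced below from the trace (flags and UUID verbatim; the UUID is of course run-specific). The -t 240 client timeout matters here: as the NOTICE just above says, a first startup must scrub the NV cache data region, which can take a while before the call returns.

# Create the FTL bdev: thin lvol as base, the 5171 MiB split of the
# cache controller (nvc0n1p0) as write-buffer cache, L2P capped at
# 60 MiB of DRAM.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create \
    -b ftl0 \
    -d 219d1749-724d-4c16-ad8f-f4a2d184c1ed \
    -c nvc0n1p0 \
    --l2p_dram_limit 60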
00:15:11.551 [2024-11-09 16:24:31.257816] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:15:14.834 [2024-11-09 16:24:33.900343] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.834 [2024-11-09 16:24:33.900403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:14.834 [2024-11-09 16:24:33.900420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2642.529 ms 00:15:14.834 [2024-11-09 16:24:33.900429] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.834 [2024-11-09 16:24:33.900634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.834 [2024-11-09 16:24:33.900646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:14.834 [2024-11-09 16:24:33.900657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.158 ms 00:15:14.834 [2024-11-09 16:24:33.900665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.834 [2024-11-09 16:24:33.924045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.834 [2024-11-09 16:24:33.924078] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:14.834 [2024-11-09 16:24:33.924092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.328 ms 00:15:14.834 [2024-11-09 16:24:33.924100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.834 [2024-11-09 16:24:33.946578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.834 [2024-11-09 16:24:33.946749] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:14.834 [2024-11-09 16:24:33.946773] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.436 ms 00:15:14.834 [2024-11-09 16:24:33.946780] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.834 [2024-11-09 16:24:33.947100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.834 [2024-11-09 16:24:33.947113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:14.834 [2024-11-09 16:24:33.947123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.283 ms 00:15:14.834 [2024-11-09 16:24:33.947131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.834 [2024-11-09 16:24:34.008605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.834 [2024-11-09 16:24:34.008726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:14.834 [2024-11-09 16:24:34.008786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.433 ms 00:15:14.834 [2024-11-09 16:24:34.008810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.834 [2024-11-09 16:24:34.034065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.834 [2024-11-09 16:24:34.034175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:14.834 [2024-11-09 16:24:34.034253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.202 ms 00:15:14.834 [2024-11-09 16:24:34.034264] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.834 [2024-11-09 16:24:34.038638] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.834 [2024-11-09 16:24:34.038669] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 
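One piece of noise worth flagging from earlier in this trace: fio.sh line 52 reported "[: -eq: unary operator expected" because the tested variable expanded to empty and was unquoted, leaving [ with "-eq 1" and no left operand. The condition evaluates false and the run proceeds, but the fix is mechanical, as the sketch below shows (the variable name "flag" is illustrative, not the script's actual one).

#!/usr/bin/env bash
flag=''
# Buggy pattern: [ $flag -eq 1 ]  ->  [: -eq: unary operator expected

# Fix 1: quote the expansion and supply a default.
if [ "${flag:-0}" -eq 1 ]; then echo enabled; else echo disabled; fi

# Fix 2: use [[ ]], which neither word-splits nor errors on the empty
# value (an empty operand evaluates arithmetically to 0).
if [[ $flag -eq 1 ]]; then echo enabled; else echo disabled; fi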
00:15:14.834 [2024-11-09 16:24:34.038684] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.334 ms 00:15:14.834 [2024-11-09 16:24:34.038692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.834 [2024-11-09 16:24:34.061832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.834 [2024-11-09 16:24:34.061862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:14.834 [2024-11-09 16:24:34.061874] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.096 ms 00:15:14.834 [2024-11-09 16:24:34.061881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.834 [2024-11-09 16:24:34.061946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.834 [2024-11-09 16:24:34.061955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:14.834 [2024-11-09 16:24:34.061965] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:15:14.834 [2024-11-09 16:24:34.061972] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.834 [2024-11-09 16:24:34.062065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:14.834 [2024-11-09 16:24:34.062075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:14.834 [2024-11-09 16:24:34.062087] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:15:14.835 [2024-11-09 16:24:34.062094] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:14.835 [2024-11-09 16:24:34.063213] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2966.849 ms, result 0 00:15:14.835 { 00:15:14.835 "name": "ftl0", 00:15:14.835 "uuid": "8f2c86cb-02e0-41b0-820f-cc3082adaaef" 00:15:14.835 } 00:15:14.835 16:24:34 -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:14.835 16:24:34 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:15:14.835 16:24:34 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:14.835 16:24:34 -- common/autotest_common.sh@899 -- # local i 00:15:14.835 16:24:34 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:14.835 16:24:34 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:14.835 16:24:34 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:14.835 16:24:34 -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:14.835 [ 00:15:14.835 { 00:15:14.835 "name": "ftl0", 00:15:14.835 "aliases": [ 00:15:14.835 "8f2c86cb-02e0-41b0-820f-cc3082adaaef" 00:15:14.835 ], 00:15:14.835 "product_name": "FTL disk", 00:15:14.835 "block_size": 4096, 00:15:14.835 "num_blocks": 20971520, 00:15:14.835 "uuid": "8f2c86cb-02e0-41b0-820f-cc3082adaaef", 00:15:14.835 "assigned_rate_limits": { 00:15:14.835 "rw_ios_per_sec": 0, 00:15:14.835 "rw_mbytes_per_sec": 0, 00:15:14.835 "r_mbytes_per_sec": 0, 00:15:14.835 "w_mbytes_per_sec": 0 00:15:14.835 }, 00:15:14.835 "claimed": false, 00:15:14.835 "zoned": false, 00:15:14.835 "supported_io_types": { 00:15:14.835 "read": true, 00:15:14.835 "write": true, 00:15:14.835 "unmap": true, 00:15:14.835 "write_zeroes": true, 00:15:14.835 "flush": true, 00:15:14.835 "reset": false, 00:15:14.835 "compare": false, 00:15:14.835 "compare_and_write": false, 00:15:14.835 "abort": false, 00:15:14.835 "nvme_admin": false, 00:15:14.835 "nvme_io": false 00:15:14.835 }, 
00:15:14.835 "driver_specific": { 00:15:14.835 "ftl": { 00:15:14.835 "base_bdev": "219d1749-724d-4c16-ad8f-f4a2d184c1ed", 00:15:14.835 "cache": "nvc0n1p0" 00:15:14.835 } 00:15:14.835 } 00:15:14.835 } 00:15:14.835 ] 00:15:14.835 16:24:34 -- common/autotest_common.sh@905 -- # return 0 00:15:14.835 16:24:34 -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:14.835 16:24:34 -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:15:15.094 16:24:34 -- ftl/fio.sh@70 -- # echo ']}' 00:15:15.094 16:24:34 -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:15:15.094 [2024-11-09 16:24:34.819622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.094 [2024-11-09 16:24:34.819663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:15:15.094 [2024-11-09 16:24:34.819674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:15.094 [2024-11-09 16:24:34.819682] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.094 [2024-11-09 16:24:34.819707] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:15:15.094 [2024-11-09 16:24:34.821850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.094 [2024-11-09 16:24:34.821874] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:15:15.094 [2024-11-09 16:24:34.821887] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.127 ms 00:15:15.094 [2024-11-09 16:24:34.821894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.094 [2024-11-09 16:24:34.822312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.094 [2024-11-09 16:24:34.822329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:15:15.094 [2024-11-09 16:24:34.822338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.390 ms 00:15:15.094 [2024-11-09 16:24:34.822345] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.094 [2024-11-09 16:24:34.824818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.094 [2024-11-09 16:24:34.824962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:15:15.094 [2024-11-09 16:24:34.824976] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.451 ms 00:15:15.094 [2024-11-09 16:24:34.824984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.094 [2024-11-09 16:24:34.829779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.094 [2024-11-09 16:24:34.829802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:15:15.094 [2024-11-09 16:24:34.829811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.766 ms 00:15:15.094 [2024-11-09 16:24:34.829818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.094 [2024-11-09 16:24:34.848572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.094 [2024-11-09 16:24:34.848598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:15:15.094 [2024-11-09 16:24:34.848607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.672 ms 00:15:15.094 [2024-11-09 16:24:34.848613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.094 [2024-11-09 16:24:34.860979] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.094 [2024-11-09 16:24:34.861005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:15:15.094 [2024-11-09 16:24:34.861027] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.327 ms 00:15:15.094 [2024-11-09 16:24:34.861034] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.094 [2024-11-09 16:24:34.861190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.094 [2024-11-09 16:24:34.861200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:15:15.094 [2024-11-09 16:24:34.861210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:15:15.094 [2024-11-09 16:24:34.861216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.354 [2024-11-09 16:24:34.878877] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.354 [2024-11-09 16:24:34.878901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:15:15.354 [2024-11-09 16:24:34.878910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.624 ms 00:15:15.354 [2024-11-09 16:24:34.878916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.354 [2024-11-09 16:24:34.896613] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.354 [2024-11-09 16:24:34.896636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:15:15.354 [2024-11-09 16:24:34.896645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.657 ms 00:15:15.354 [2024-11-09 16:24:34.896651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.354 [2024-11-09 16:24:34.914076] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.354 [2024-11-09 16:24:34.914100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:15:15.354 [2024-11-09 16:24:34.914109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.388 ms 00:15:15.354 [2024-11-09 16:24:34.914115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.354 [2024-11-09 16:24:34.931178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.354 [2024-11-09 16:24:34.931305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:15:15.354 [2024-11-09 16:24:34.931322] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.977 ms 00:15:15.354 [2024-11-09 16:24:34.931328] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.354 [2024-11-09 16:24:34.931363] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:15:15.354 [2024-11-09 16:24:34.931377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:15:15.354 [2024-11-09 16:24:34.931387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:15:15.354 [2024-11-09 16:24:34.931393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:15:15.354 [2024-11-09 16:24:34.931400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:15:15.354 [2024-11-09 16:24:34.931406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:15:15.354 [2024-11-09 16:24:34.931413] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
[Bands 7 through 79 elided: every band in this dump reports the identical 0 / 261120 wr_cnt: 0 state: free]
00:15:15.355 [2024-11-09 16:24:34.931915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120
wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.931921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.931928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.931933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.931941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.931946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.931954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.931960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.931967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.931973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.931991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.931996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.932004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.932009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.932017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.932023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.932034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.932042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.932051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.932056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.932064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:15:15.355 [2024-11-09 16:24:34.932076] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:15:15.355 [2024-11-09 16:24:34.932085] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8f2c86cb-02e0-41b0-820f-cc3082adaaef 00:15:15.355 [2024-11-09 16:24:34.932091] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:15:15.355 [2024-11-09 16:24:34.932098] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:15:15.355 [2024-11-09 16:24:34.932103] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:15:15.355 [2024-11-09 16:24:34.932111] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:15:15.355 [2024-11-09 16:24:34.932116] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:15:15.355 [2024-11-09 16:24:34.932124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:15:15.355 [2024-11-09 16:24:34.932130] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:15:15.355 [2024-11-09 16:24:34.932136] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:15:15.355 [2024-11-09 16:24:34.932142] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:15:15.355 [2024-11-09 16:24:34.932150] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.355 [2024-11-09 16:24:34.932158] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:15:15.355 [2024-11-09 16:24:34.932166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:15:15.355 [2024-11-09 16:24:34.932172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.355 [2024-11-09 16:24:34.942325] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.355 [2024-11-09 16:24:34.942421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:15:15.355 [2024-11-09 16:24:34.942437] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.116 ms 00:15:15.355 [2024-11-09 16:24:34.942442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.355 [2024-11-09 16:24:34.942612] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:15.355 [2024-11-09 16:24:34.942620] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:15:15.355 [2024-11-09 16:24:34.942627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:15:15.355 [2024-11-09 16:24:34.942632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.355 [2024-11-09 16:24:34.979331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.355 [2024-11-09 16:24:34.979357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:15.355 [2024-11-09 16:24:34.979367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.355 [2024-11-09 16:24:34.979373] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.355 [2024-11-09 16:24:34.979440] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.355 [2024-11-09 16:24:34.979447] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:15.355 [2024-11-09 16:24:34.979454] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.355 [2024-11-09 16:24:34.979461] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.355 [2024-11-09 16:24:34.979530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.355 [2024-11-09 16:24:34.979539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:15.355 [2024-11-09 16:24:34.979547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.355 [2024-11-09 16:24:34.979552] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.355 [2024-11-09 16:24:34.979573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.355 [2024-11-09 16:24:34.979581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
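
Each management step in the trace above and below is a fixed four-record group (Action/Rollback, name, duration, status). Assuming the run was captured to a file, a quick way to rank the slow steps (a sketch against exactly this record format):

  grep 'trace_step' run.log \
    | grep -o 'duration: [0-9.]* ms' \
    | sort -t' ' -k2 -rn | head   # longest-running steps first
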
00:15:15.355 [2024-11-09 16:24:34.979589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.355 [2024-11-09 16:24:34.979595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.355 [2024-11-09 16:24:35.047546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.355 [2024-11-09 16:24:35.047582] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:15.356 [2024-11-09 16:24:35.047595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.356 [2024-11-09 16:24:35.047602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.356 [2024-11-09 16:24:35.071535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.356 [2024-11-09 16:24:35.071561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:15.356 [2024-11-09 16:24:35.071570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.356 [2024-11-09 16:24:35.071577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.356 [2024-11-09 16:24:35.071637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.356 [2024-11-09 16:24:35.071645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:15.356 [2024-11-09 16:24:35.071652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.356 [2024-11-09 16:24:35.071658] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.356 [2024-11-09 16:24:35.071717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.356 [2024-11-09 16:24:35.071725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:15.356 [2024-11-09 16:24:35.071734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.356 [2024-11-09 16:24:35.071740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.356 [2024-11-09 16:24:35.071836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.356 [2024-11-09 16:24:35.071845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:15.356 [2024-11-09 16:24:35.071853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.356 [2024-11-09 16:24:35.071858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.356 [2024-11-09 16:24:35.071901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.356 [2024-11-09 16:24:35.071909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:15:15.356 [2024-11-09 16:24:35.071916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.356 [2024-11-09 16:24:35.071924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.356 [2024-11-09 16:24:35.071964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.356 [2024-11-09 16:24:35.071970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:15.356 [2024-11-09 16:24:35.071977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.356 [2024-11-09 16:24:35.071984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.356 [2024-11-09 16:24:35.072039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:15:15.356 [2024-11-09 16:24:35.072047] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:15.356 [2024-11-09 16:24:35.072056] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:15:15.356 [2024-11-09 16:24:35.072062] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:15.356 [2024-11-09 16:24:35.072221] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 252.566 ms, result 0 00:15:15.356 true 00:15:15.356 16:24:35 -- ftl/fio.sh@75 -- # killprocess 70829 00:15:15.356 16:24:35 -- common/autotest_common.sh@936 -- # '[' -z 70829 ']' 00:15:15.356 16:24:35 -- common/autotest_common.sh@940 -- # kill -0 70829 00:15:15.356 16:24:35 -- common/autotest_common.sh@941 -- # uname 00:15:15.356 16:24:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:15.356 16:24:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70829 00:15:15.356 killing process with pid 70829 00:15:15.356 16:24:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:15.356 16:24:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:15.356 16:24:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70829' 00:15:15.356 16:24:35 -- common/autotest_common.sh@955 -- # kill 70829 00:15:15.356 16:24:35 -- common/autotest_common.sh@960 -- # wait 70829 00:15:21.971 16:24:40 -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:15:21.971 16:24:40 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:21.971 16:24:40 -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:15:21.971 16:24:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:21.971 16:24:40 -- common/autotest_common.sh@10 -- # set +x 00:15:21.971 16:24:40 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:21.971 16:24:40 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:21.971 16:24:40 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:21.971 16:24:40 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:21.971 16:24:40 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:21.971 16:24:40 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:21.971 16:24:40 -- common/autotest_common.sh@1330 -- # shift 00:15:21.971 16:24:40 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:21.971 16:24:40 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:21.971 16:24:40 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:21.971 16:24:40 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:21.971 16:24:40 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:21.971 16:24:40 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:21.971 16:24:40 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:21.971 16:24:40 -- common/autotest_common.sh@1336 -- # break 00:15:21.971 16:24:40 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:21.971 16:24:40 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:15:21.971 test: (g=0): rw=randwrite, bs=(R) 
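
The fio_plugin helper traced above resolves the ASAN runtime the SPDK fio engine was built against and preloads both before invoking fio; condensed into a sketch with the paths taken from this run:

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # resolves to /usr/lib64/libasan.so.8 here
  LD_PRELOAD="$asan_lib $plugin" \
    /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio
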
68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:15:21.971 fio-3.35 00:15:21.971 Starting 1 thread 00:15:27.265 00:15:27.265 test: (groupid=0, jobs=1): err= 0: pid=71050: Sat Nov 9 16:24:47 2024 00:15:27.265 read: IOPS=788, BW=52.4MiB/s (54.9MB/s)(255MiB/4862msec) 00:15:27.265 slat (nsec): min=2941, max=31070, avg=4565.51, stdev=2331.78 00:15:27.265 clat (usec): min=285, max=5254, avg=572.39, stdev=170.08 00:15:27.265 lat (usec): min=288, max=5257, avg=576.95, stdev=170.37 00:15:27.265 clat percentiles (usec): 00:15:27.265 | 1.00th=[ 318], 5.00th=[ 437], 10.00th=[ 461], 20.00th=[ 486], 00:15:27.265 | 30.00th=[ 506], 40.00th=[ 506], 50.00th=[ 519], 60.00th=[ 529], 00:15:27.265 | 70.00th=[ 553], 80.00th=[ 603], 90.00th=[ 848], 95.00th=[ 906], 00:15:27.265 | 99.00th=[ 1090], 99.50th=[ 1156], 99.90th=[ 1303], 99.95th=[ 1745], 00:15:27.265 | 99.99th=[ 5276] 00:15:27.265 write: IOPS=793, BW=52.7MiB/s (55.3MB/s)(256MiB/4857msec); 0 zone resets 00:15:27.265 slat (nsec): min=13567, max=77715, avg=21703.12, stdev=6569.77 00:15:27.265 clat (usec): min=317, max=1706, avg=653.59, stdev=169.89 00:15:27.265 lat (usec): min=340, max=1726, avg=675.30, stdev=170.99 00:15:27.265 clat percentiles (usec): 00:15:27.265 | 1.00th=[ 400], 5.00th=[ 486], 10.00th=[ 529], 20.00th=[ 562], 00:15:27.265 | 30.00th=[ 586], 40.00th=[ 594], 50.00th=[ 603], 60.00th=[ 611], 00:15:27.265 | 70.00th=[ 635], 80.00th=[ 685], 90.00th=[ 922], 95.00th=[ 979], 00:15:27.265 | 99.00th=[ 1270], 99.50th=[ 1467], 99.90th=[ 1647], 99.95th=[ 1680], 00:15:27.265 | 99.99th=[ 1713] 00:15:27.265 bw ( KiB/s): min=45968, max=61200, per=100.00%, avg=55140.44, stdev=4990.53, samples=9 00:15:27.265 iops : min= 676, max= 900, avg=810.89, stdev=73.39, samples=9 00:15:27.265 lat (usec) : 500=15.58%, 750=68.06%, 1000=13.30% 00:15:27.265 lat (msec) : 2=3.04%, 10=0.01% 00:15:27.265 cpu : usr=99.40%, sys=0.02%, ctx=10, majf=0, minf=1318 00:15:27.265 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:27.265 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.265 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:27.265 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:27.265 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:27.265 00:15:27.265 Run status group 0 (all jobs): 00:15:27.265 READ: bw=52.4MiB/s (54.9MB/s), 52.4MiB/s-52.4MiB/s (54.9MB/s-54.9MB/s), io=255MiB (267MB), run=4862-4862msec 00:15:27.265 WRITE: bw=52.7MiB/s (55.3MB/s), 52.7MiB/s-52.7MiB/s (55.3MB/s-55.3MB/s), io=256MiB (269MB), run=4857-4857msec 00:15:29.231 ----------------------------------------------------- 00:15:29.231 Suppressions used: 00:15:29.231 count bytes template 00:15:29.231 1 5 /usr/src/fio/parse.c 00:15:29.231 1 8 libtcmalloc_minimal.so 00:15:29.231 1 904 libcrypto.so 00:15:29.231 ----------------------------------------------------- 00:15:29.231 00:15:29.231 16:24:48 -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:15:29.231 16:24:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:29.231 16:24:48 -- common/autotest_common.sh@10 -- # set +x 00:15:29.231 16:24:48 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:29.231 16:24:48 -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:15:29.231 16:24:48 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:29.231 16:24:48 -- common/autotest_common.sh@10 -- # set +x 00:15:29.231 16:24:48 -- ftl/fio.sh@80 -- # fio_bdev 
/home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:29.231 16:24:48 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:29.231 16:24:48 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:29.231 16:24:48 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:29.231 16:24:48 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:29.231 16:24:48 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:29.231 16:24:48 -- common/autotest_common.sh@1330 -- # shift 00:15:29.231 16:24:48 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:29.231 16:24:48 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:29.231 16:24:48 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:29.231 16:24:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:29.231 16:24:48 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:29.231 16:24:48 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:29.231 16:24:48 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:29.231 16:24:48 -- common/autotest_common.sh@1336 -- # break 00:15:29.231 16:24:48 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:29.232 16:24:48 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:15:29.232 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:29.232 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:29.232 fio-3.35 00:15:29.232 Starting 2 threads 00:15:55.789 00:15:55.789 first_half: (groupid=0, jobs=1): err= 0: pid=71164: Sat Nov 9 16:25:11 2024 00:15:55.789 read: IOPS=3087, BW=12.1MiB/s (12.6MB/s)(255MiB/21131msec) 00:15:55.789 slat (nsec): min=2889, max=45674, avg=4068.24, stdev=1163.42 00:15:55.789 clat (usec): min=552, max=389393, avg=32380.29, stdev=16848.04 00:15:55.789 lat (usec): min=556, max=389398, avg=32384.36, stdev=16848.19 00:15:55.789 clat percentiles (msec): 00:15:55.789 | 1.00th=[ 6], 5.00th=[ 26], 10.00th=[ 26], 20.00th=[ 27], 00:15:55.789 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 30], 60.00th=[ 30], 00:15:55.789 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 38], 95.00th=[ 45], 00:15:55.789 | 99.00th=[ 121], 99.50th=[ 133], 99.90th=[ 180], 99.95th=[ 309], 00:15:55.789 | 99.99th=[ 376] 00:15:55.789 write: IOPS=4013, BW=15.7MiB/s (16.4MB/s)(256MiB/16327msec); 0 zone resets 00:15:55.789 slat (usec): min=3, max=2113, avg= 6.20, stdev=15.37 00:15:55.789 clat (usec): min=346, max=73816, avg=9012.82, stdev=13472.75 00:15:55.789 lat (usec): min=351, max=73822, avg=9019.02, stdev=13473.26 00:15:55.789 clat percentiles (usec): 00:15:55.789 | 1.00th=[ 594], 5.00th=[ 734], 10.00th=[ 840], 20.00th=[ 1045], 00:15:55.789 | 30.00th=[ 2114], 40.00th=[ 3326], 50.00th=[ 4490], 60.00th=[ 5145], 00:15:55.789 | 70.00th=[ 5866], 80.00th=[10945], 90.00th=[23200], 95.00th=[50594], 00:15:55.789 | 99.00th=[58459], 99.50th=[61080], 99.90th=[69731], 99.95th=[71828], 00:15:55.789 | 99.99th=[72877] 00:15:55.789 bw ( KiB/s): min= 8, max=43264, per=86.00%, 
avg=24960.05, stdev=14479.51, samples=21 00:15:55.789 iops : min= 2, max=10816, avg=6239.95, stdev=3619.86, samples=21 00:15:55.789 lat (usec) : 500=0.08%, 750=2.82%, 1000=6.32% 00:15:55.789 lat (msec) : 2=5.72%, 4=8.30%, 10=16.65%, 20=4.41%, 50=50.82% 00:15:55.789 lat (msec) : 100=4.13%, 250=0.72%, 500=0.04% 00:15:55.789 cpu : usr=99.21%, sys=0.23%, ctx=35, majf=0, minf=5581 00:15:55.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:55.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.789 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:55.789 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:55.789 second_half: (groupid=0, jobs=1): err= 0: pid=71165: Sat Nov 9 16:25:11 2024 00:15:55.789 read: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(255MiB/21303msec) 00:15:55.789 slat (nsec): min=3073, max=43030, avg=5528.43, stdev=1159.00 00:15:55.789 clat (usec): min=539, max=400962, avg=32057.45, stdev=18651.18 00:15:55.789 lat (usec): min=546, max=400968, avg=32062.98, stdev=18651.30 00:15:55.789 clat percentiles (msec): 00:15:55.789 | 1.00th=[ 8], 5.00th=[ 24], 10.00th=[ 26], 20.00th=[ 27], 00:15:55.789 | 30.00th=[ 29], 40.00th=[ 29], 50.00th=[ 30], 60.00th=[ 30], 00:15:55.789 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 44], 00:15:55.789 | 99.00th=[ 131], 99.50th=[ 144], 99.90th=[ 205], 99.95th=[ 284], 00:15:55.789 | 99.99th=[ 393] 00:15:55.789 write: IOPS=3627, BW=14.2MiB/s (14.9MB/s)(256MiB/18064msec); 0 zone resets 00:15:55.789 slat (usec): min=3, max=2044, avg= 6.99, stdev=16.65 00:15:55.789 clat (usec): min=312, max=74281, avg=9668.71, stdev=14266.77 00:15:55.789 lat (usec): min=321, max=74289, avg=9675.70, stdev=14267.28 00:15:55.789 clat percentiles (usec): 00:15:55.789 | 1.00th=[ 570], 5.00th=[ 717], 10.00th=[ 816], 20.00th=[ 1029], 00:15:55.789 | 30.00th=[ 2114], 40.00th=[ 2868], 50.00th=[ 3752], 60.00th=[ 4686], 00:15:55.789 | 70.00th=[ 5997], 80.00th=[15926], 90.00th=[27395], 95.00th=[51119], 00:15:55.789 | 99.00th=[58983], 99.50th=[62129], 99.90th=[71828], 99.95th=[72877], 00:15:55.789 | 99.99th=[73925] 00:15:55.790 bw ( KiB/s): min= 1016, max=61728, per=90.31%, avg=26211.80, stdev=16286.53, samples=20 00:15:55.790 iops : min= 254, max=15432, avg=6552.95, stdev=4071.63, samples=20 00:15:55.790 lat (usec) : 500=0.09%, 750=3.23%, 1000=6.31% 00:15:55.790 lat (msec) : 2=5.00%, 4=11.85%, 10=12.94%, 20=4.19%, 50=51.54% 00:15:55.790 lat (msec) : 100=3.83%, 250=0.99%, 500=0.03% 00:15:55.790 cpu : usr=99.37%, sys=0.21%, ctx=43, majf=0, minf=5536 00:15:55.790 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:15:55.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:55.790 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:55.790 issued rwts: total=65251,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:55.790 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:55.790 00:15:55.790 Run status group 0 (all jobs): 00:15:55.790 READ: bw=23.9MiB/s (25.1MB/s), 12.0MiB/s-12.1MiB/s (12.5MB/s-12.6MB/s), io=510MiB (534MB), run=21131-21303msec 00:15:55.790 WRITE: bw=28.3MiB/s (29.7MB/s), 14.2MiB/s-15.7MiB/s (14.9MB/s-16.4MB/s), io=512MiB (537MB), run=16327-18064msec 00:15:55.790 ----------------------------------------------------- 00:15:55.790 Suppressions used: 00:15:55.790 count bytes template 00:15:55.790 2 10 
/usr/src/fio/parse.c 00:15:55.790 2 192 /usr/src/fio/iolog.c 00:15:55.790 1 8 libtcmalloc_minimal.so 00:15:55.790 1 904 libcrypto.so 00:15:55.790 ----------------------------------------------------- 00:15:55.790 00:15:55.790 16:25:13 -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:15:55.790 16:25:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:55.790 16:25:13 -- common/autotest_common.sh@10 -- # set +x 00:15:55.790 16:25:13 -- ftl/fio.sh@78 -- # for test in ${tests} 00:15:55.790 16:25:13 -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:15:55.790 16:25:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:55.790 16:25:13 -- common/autotest_common.sh@10 -- # set +x 00:15:55.790 16:25:13 -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:55.790 16:25:13 -- common/autotest_common.sh@1345 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:55.790 16:25:13 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:15:55.790 16:25:13 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:55.790 16:25:13 -- common/autotest_common.sh@1328 -- # local sanitizers 00:15:55.790 16:25:13 -- common/autotest_common.sh@1329 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:55.790 16:25:13 -- common/autotest_common.sh@1330 -- # shift 00:15:55.790 16:25:13 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:15:55.790 16:25:13 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:15:55.790 16:25:13 -- common/autotest_common.sh@1334 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:55.790 16:25:13 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:15:55.790 16:25:13 -- common/autotest_common.sh@1334 -- # grep libasan 00:15:55.790 16:25:13 -- common/autotest_common.sh@1334 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:55.790 16:25:13 -- common/autotest_common.sh@1335 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:55.790 16:25:13 -- common/autotest_common.sh@1336 -- # break 00:15:55.790 16:25:13 -- common/autotest_common.sh@1341 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:55.790 16:25:13 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:15:55.790 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:15:55.790 fio-3.35 00:15:55.790 Starting 1 thread 00:16:07.995 00:16:07.995 test: (groupid=0, jobs=1): err= 0: pid=71455: Sat Nov 9 16:25:27 2024 00:16:07.995 read: IOPS=8415, BW=32.9MiB/s (34.5MB/s)(255MiB/7748msec) 00:16:07.995 slat (nsec): min=3020, max=35553, avg=4624.29, stdev=971.63 00:16:07.995 clat (usec): min=509, max=31662, avg=15202.30, stdev=1801.91 00:16:07.995 lat (usec): min=514, max=31665, avg=15206.92, stdev=1801.93 00:16:07.995 clat percentiles (usec): 00:16:07.995 | 1.00th=[13173], 5.00th=[13435], 10.00th=[13566], 20.00th=[14484], 00:16:07.995 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14877], 60.00th=[15008], 00:16:07.995 | 70.00th=[15139], 80.00th=[15401], 90.00th=[16909], 95.00th=[18744], 00:16:07.995 | 99.00th=[22938], 99.50th=[24249], 99.90th=[28967], 99.95th=[30016], 00:16:07.995 | 99.99th=[30802] 00:16:07.995 write: IOPS=14.4k, BW=56.4MiB/s (59.1MB/s)(256MiB/4540msec); 0 zone resets 00:16:07.995 
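
As a quick cross-check of the numbers above, fio's bandwidth figure is just data moved over runtime; for the write phase of the depth128 job:

  awk 'BEGIN { printf "%.1f MiB/s\n", 256 / 4.540 }'   # 256 MiB in 4540 ms -> 56.4 MiB/s, matching the log
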
slat (usec): min=4, max=114, avg= 6.17, stdev= 2.22 00:16:07.995 clat (usec): min=455, max=50940, avg=8826.06, stdev=10146.16 00:16:07.995 lat (usec): min=460, max=50946, avg=8832.23, stdev=10146.19 00:16:07.995 clat percentiles (usec): 00:16:07.995 | 1.00th=[ 603], 5.00th=[ 709], 10.00th=[ 799], 20.00th=[ 947], 00:16:07.995 | 30.00th=[ 1139], 40.00th=[ 1549], 50.00th=[ 5407], 60.00th=[ 6587], 00:16:07.996 | 70.00th=[10421], 80.00th=[14615], 90.00th=[26084], 95.00th=[32900], 00:16:07.996 | 99.00th=[37487], 99.50th=[39584], 99.90th=[42206], 99.95th=[43254], 00:16:07.996 | 99.99th=[48497] 00:16:07.996 bw ( KiB/s): min= 2912, max=84720, per=90.80%, avg=52428.80, stdev=22080.19, samples=10 00:16:07.996 iops : min= 728, max=21180, avg=13107.20, stdev=5520.05, samples=10 00:16:07.996 lat (usec) : 500=0.02%, 750=3.62%, 1000=7.92% 00:16:07.996 lat (msec) : 2=8.93%, 4=0.71%, 10=13.60%, 20=55.49%, 50=9.70% 00:16:07.996 lat (msec) : 100=0.01% 00:16:07.996 cpu : usr=99.28%, sys=0.26%, ctx=18, majf=0, minf=5567 00:16:07.996 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:07.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:07.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:07.996 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:07.996 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:07.996 00:16:07.996 Run status group 0 (all jobs): 00:16:07.996 READ: bw=32.9MiB/s (34.5MB/s), 32.9MiB/s-32.9MiB/s (34.5MB/s-34.5MB/s), io=255MiB (267MB), run=7748-7748msec 00:16:07.996 WRITE: bw=56.4MiB/s (59.1MB/s), 56.4MiB/s-56.4MiB/s (59.1MB/s-59.1MB/s), io=256MiB (268MB), run=4540-4540msec 00:16:08.940 ----------------------------------------------------- 00:16:08.940 Suppressions used: 00:16:08.940 count bytes template 00:16:08.940 1 5 /usr/src/fio/parse.c 00:16:08.940 2 192 /usr/src/fio/iolog.c 00:16:08.940 1 8 libtcmalloc_minimal.so 00:16:08.940 1 904 libcrypto.so 00:16:08.940 ----------------------------------------------------- 00:16:08.940 00:16:08.940 16:25:28 -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:08.940 16:25:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:08.940 16:25:28 -- common/autotest_common.sh@10 -- # set +x 00:16:08.940 16:25:28 -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:08.940 Remove shared memory files 00:16:08.940 16:25:28 -- ftl/fio.sh@85 -- # remove_shm 00:16:08.940 16:25:28 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:08.940 16:25:28 -- ftl/common.sh@205 -- # rm -f rm -f 00:16:08.940 16:25:28 -- ftl/common.sh@206 -- # rm -f rm -f 00:16:08.940 16:25:28 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid56159 /dev/shm/spdk_tgt_trace.pid69728 00:16:08.940 16:25:28 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:08.940 16:25:28 -- ftl/common.sh@209 -- # rm -f rm -f 00:16:08.940 ************************************ 00:16:08.940 END TEST ftl_fio_basic 00:16:08.940 ************************************ 00:16:08.940 00:16:08.940 real 1m1.753s 00:16:08.940 user 2m2.942s 00:16:08.940 sys 0m12.984s 00:16:08.940 16:25:28 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:08.940 16:25:28 -- common/autotest_common.sh@10 -- # set +x 00:16:09.202 16:25:28 -- ftl/ftl.sh@75 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:16:09.202 16:25:28 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 
']' 00:16:09.202 16:25:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:09.202 16:25:28 -- common/autotest_common.sh@10 -- # set +x 00:16:09.202 ************************************ 00:16:09.202 START TEST ftl_bdevperf 00:16:09.202 ************************************ 00:16:09.202 16:25:28 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:07.0 0000:00:06.0 00:16:09.202 * Looking for test storage... 00:16:09.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:09.202 16:25:28 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:09.202 16:25:28 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:09.202 16:25:28 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:09.202 16:25:28 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:09.202 16:25:28 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:09.202 16:25:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:09.202 16:25:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:09.202 16:25:28 -- scripts/common.sh@335 -- # IFS=.-: 00:16:09.202 16:25:28 -- scripts/common.sh@335 -- # read -ra ver1 00:16:09.202 16:25:28 -- scripts/common.sh@336 -- # IFS=.-: 00:16:09.202 16:25:28 -- scripts/common.sh@336 -- # read -ra ver2 00:16:09.202 16:25:28 -- scripts/common.sh@337 -- # local 'op=<' 00:16:09.202 16:25:28 -- scripts/common.sh@339 -- # ver1_l=2 00:16:09.202 16:25:28 -- scripts/common.sh@340 -- # ver2_l=1 00:16:09.202 16:25:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:09.202 16:25:28 -- scripts/common.sh@343 -- # case "$op" in 00:16:09.202 16:25:28 -- scripts/common.sh@344 -- # : 1 00:16:09.202 16:25:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:09.202 16:25:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:09.202 16:25:28 -- scripts/common.sh@364 -- # decimal 1 00:16:09.202 16:25:28 -- scripts/common.sh@352 -- # local d=1 00:16:09.202 16:25:28 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:09.202 16:25:28 -- scripts/common.sh@354 -- # echo 1 00:16:09.202 16:25:28 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:09.202 16:25:28 -- scripts/common.sh@365 -- # decimal 2 00:16:09.202 16:25:28 -- scripts/common.sh@352 -- # local d=2 00:16:09.202 16:25:28 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:09.202 16:25:28 -- scripts/common.sh@354 -- # echo 2 00:16:09.202 16:25:28 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:09.202 16:25:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:09.202 16:25:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:09.202 16:25:28 -- scripts/common.sh@367 -- # return 0 00:16:09.202 16:25:28 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:09.202 16:25:28 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:09.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.202 --rc genhtml_branch_coverage=1 00:16:09.202 --rc genhtml_function_coverage=1 00:16:09.202 --rc genhtml_legend=1 00:16:09.202 --rc geninfo_all_blocks=1 00:16:09.202 --rc geninfo_unexecuted_blocks=1 00:16:09.202 00:16:09.202 ' 00:16:09.202 16:25:28 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:09.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.202 --rc genhtml_branch_coverage=1 00:16:09.202 --rc genhtml_function_coverage=1 00:16:09.202 --rc genhtml_legend=1 00:16:09.202 --rc geninfo_all_blocks=1 00:16:09.202 --rc geninfo_unexecuted_blocks=1 00:16:09.202 00:16:09.202 ' 00:16:09.202 16:25:28 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:09.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.202 --rc genhtml_branch_coverage=1 00:16:09.202 --rc genhtml_function_coverage=1 00:16:09.202 --rc genhtml_legend=1 00:16:09.202 --rc geninfo_all_blocks=1 00:16:09.202 --rc geninfo_unexecuted_blocks=1 00:16:09.202 00:16:09.202 ' 00:16:09.202 16:25:28 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:09.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:09.202 --rc genhtml_branch_coverage=1 00:16:09.202 --rc genhtml_function_coverage=1 00:16:09.202 --rc genhtml_legend=1 00:16:09.202 --rc geninfo_all_blocks=1 00:16:09.202 --rc geninfo_unexecuted_blocks=1 00:16:09.202 00:16:09.202 ' 00:16:09.202 16:25:28 -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:09.202 16:25:28 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:09.202 16:25:28 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:09.202 16:25:28 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:09.202 16:25:28 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
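
The lcov version gate traced above (scripts/common.sh lt/cmp_versions) is at heart a field-by-field numeric compare; a simplified sketch that handles only plain dotted versions (the real helper also splits on '-' and ':'):

  lt() {
    local IFS=. i n
    local -a a=($1) b=($2)
    n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for ((i = 0; i < n; i++)); do
      (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly less in this field
      (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly greater
    done
    return 1   # equal versions are not "less than"
  }
  lt 1.15 2 && echo "lcov is pre-2.x; add branch/function coverage flags"
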
00:16:09.202 16:25:28 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:09.203 16:25:28 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:09.203 16:25:28 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:09.203 16:25:28 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:09.203 16:25:28 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:09.203 16:25:28 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:09.203 16:25:28 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:09.203 16:25:28 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:09.203 16:25:28 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:09.203 16:25:28 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:09.203 16:25:28 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:09.203 16:25:28 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:09.203 16:25:28 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:09.203 16:25:28 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:09.203 16:25:28 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:09.203 16:25:28 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:09.203 16:25:28 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:09.203 16:25:28 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:09.203 16:25:28 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:09.203 16:25:28 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:09.203 16:25:28 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:09.203 16:25:28 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:09.203 16:25:28 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:09.203 16:25:28 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:09.203 16:25:28 -- ftl/bdevperf.sh@11 -- # device=0000:00:07.0 00:16:09.203 16:25:28 -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:06.0 00:16:09.203 16:25:28 -- ftl/bdevperf.sh@13 -- # use_append= 00:16:09.203 16:25:28 -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:09.203 16:25:28 -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:09.203 16:25:28 -- ftl/bdevperf.sh@17 -- # timing_enter '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0' 00:16:09.203 16:25:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:09.203 16:25:28 -- common/autotest_common.sh@10 -- # set +x 00:16:09.203 16:25:28 -- ftl/bdevperf.sh@19 -- # bdevperf_pid=71683 00:16:09.203 16:25:28 -- ftl/bdevperf.sh@21 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:09.203 16:25:28 -- ftl/bdevperf.sh@22 -- # waitforlisten 71683 00:16:09.203 16:25:28 -- ftl/bdevperf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:09.203 16:25:28 -- common/autotest_common.sh@829 -- # '[' -z 71683 ']' 00:16:09.203 16:25:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.203 16:25:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.203 16:25:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
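
bdevperf.sh@18-22 in the trace above start the perf app idle (-z) against ftl0 and then block on its RPC socket; schematically:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
  bdevperf_pid=$!
  trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
  waitforlisten "$bdevperf_pid"   # polls /var/tmp/spdk.sock until the app responds
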
UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.203 16:25:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.203 16:25:28 -- common/autotest_common.sh@10 -- # set +x 00:16:09.463 [2024-11-09 16:25:28.991926] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:09.463 [2024-11-09 16:25:28.992854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71683 ] 00:16:09.463 [2024-11-09 16:25:29.142929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.724 [2024-11-09 16:25:29.362134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.298 16:25:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:10.298 16:25:29 -- common/autotest_common.sh@862 -- # return 0 00:16:10.298 16:25:29 -- ftl/bdevperf.sh@23 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:16:10.298 16:25:29 -- ftl/common.sh@54 -- # local name=nvme0 00:16:10.298 16:25:29 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:16:10.298 16:25:29 -- ftl/common.sh@56 -- # local size=103424 00:16:10.298 16:25:29 -- ftl/common.sh@59 -- # local base_bdev 00:16:10.298 16:25:29 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:16:10.559 16:25:30 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:10.559 16:25:30 -- ftl/common.sh@62 -- # local base_size 00:16:10.559 16:25:30 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:10.559 16:25:30 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:16:10.559 16:25:30 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:10.559 16:25:30 -- common/autotest_common.sh@1369 -- # local bs 00:16:10.559 16:25:30 -- common/autotest_common.sh@1370 -- # local nb 00:16:10.559 16:25:30 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:10.820 16:25:30 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:10.820 { 00:16:10.820 "name": "nvme0n1", 00:16:10.820 "aliases": [ 00:16:10.820 "a2a4ea7e-4ed0-48c7-ac10-b47ee696fd7a" 00:16:10.820 ], 00:16:10.820 "product_name": "NVMe disk", 00:16:10.820 "block_size": 4096, 00:16:10.820 "num_blocks": 1310720, 00:16:10.820 "uuid": "a2a4ea7e-4ed0-48c7-ac10-b47ee696fd7a", 00:16:10.820 "assigned_rate_limits": { 00:16:10.820 "rw_ios_per_sec": 0, 00:16:10.820 "rw_mbytes_per_sec": 0, 00:16:10.820 "r_mbytes_per_sec": 0, 00:16:10.820 "w_mbytes_per_sec": 0 00:16:10.820 }, 00:16:10.820 "claimed": true, 00:16:10.820 "claim_type": "read_many_write_one", 00:16:10.820 "zoned": false, 00:16:10.820 "supported_io_types": { 00:16:10.820 "read": true, 00:16:10.820 "write": true, 00:16:10.820 "unmap": true, 00:16:10.820 "write_zeroes": true, 00:16:10.820 "flush": true, 00:16:10.820 "reset": true, 00:16:10.820 "compare": true, 00:16:10.820 "compare_and_write": false, 00:16:10.820 "abort": true, 00:16:10.820 "nvme_admin": true, 00:16:10.820 "nvme_io": true 00:16:10.820 }, 00:16:10.820 "driver_specific": { 00:16:10.820 "nvme": [ 00:16:10.820 { 00:16:10.820 "pci_address": "0000:00:07.0", 00:16:10.820 "trid": { 00:16:10.820 "trtype": "PCIe", 00:16:10.820 "traddr": "0000:00:07.0" 00:16:10.820 }, 00:16:10.820 "ctrlr_data": { 00:16:10.820 "cntlid": 0, 
00:16:10.820 "vendor_id": "0x1b36", 00:16:10.820 "model_number": "QEMU NVMe Ctrl", 00:16:10.820 "serial_number": "12341", 00:16:10.820 "firmware_revision": "8.0.0", 00:16:10.820 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:10.820 "oacs": { 00:16:10.820 "security": 0, 00:16:10.821 "format": 1, 00:16:10.821 "firmware": 0, 00:16:10.821 "ns_manage": 1 00:16:10.821 }, 00:16:10.821 "multi_ctrlr": false, 00:16:10.821 "ana_reporting": false 00:16:10.821 }, 00:16:10.821 "vs": { 00:16:10.821 "nvme_version": "1.4" 00:16:10.821 }, 00:16:10.821 "ns_data": { 00:16:10.821 "id": 1, 00:16:10.821 "can_share": false 00:16:10.821 } 00:16:10.821 } 00:16:10.821 ], 00:16:10.821 "mp_policy": "active_passive" 00:16:10.821 } 00:16:10.821 } 00:16:10.821 ]' 00:16:10.821 16:25:30 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:10.821 16:25:30 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:10.821 16:25:30 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:10.821 16:25:30 -- common/autotest_common.sh@1373 -- # nb=1310720 00:16:10.821 16:25:30 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:16:10.821 16:25:30 -- common/autotest_common.sh@1377 -- # echo 5120 00:16:10.821 16:25:30 -- ftl/common.sh@63 -- # base_size=5120 00:16:10.821 16:25:30 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:10.821 16:25:30 -- ftl/common.sh@67 -- # clear_lvols 00:16:10.821 16:25:30 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:10.821 16:25:30 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:11.083 16:25:30 -- ftl/common.sh@28 -- # stores=c9522424-d2ea-450d-b4f5-2845027c1db9 00:16:11.083 16:25:30 -- ftl/common.sh@29 -- # for lvs in $stores 00:16:11.083 16:25:30 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c9522424-d2ea-450d-b4f5-2845027c1db9 00:16:11.083 16:25:30 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:11.344 16:25:31 -- ftl/common.sh@68 -- # lvs=f44aadd9-b246-4f87-b12f-05b9c3d5e52b 00:16:11.344 16:25:31 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f44aadd9-b246-4f87-b12f-05b9c3d5e52b 00:16:11.605 16:25:31 -- ftl/bdevperf.sh@23 -- # split_bdev=61843b1c-be6e-49d7-81f7-bb3df42a145b 00:16:11.605 16:25:31 -- ftl/bdevperf.sh@24 -- # create_nv_cache_bdev nvc0 0000:00:06.0 61843b1c-be6e-49d7-81f7-bb3df42a145b 00:16:11.605 16:25:31 -- ftl/common.sh@35 -- # local name=nvc0 00:16:11.605 16:25:31 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:11.605 16:25:31 -- ftl/common.sh@37 -- # local base_bdev=61843b1c-be6e-49d7-81f7-bb3df42a145b 00:16:11.605 16:25:31 -- ftl/common.sh@38 -- # local cache_size= 00:16:11.605 16:25:31 -- ftl/common.sh@41 -- # get_bdev_size 61843b1c-be6e-49d7-81f7-bb3df42a145b 00:16:11.605 16:25:31 -- common/autotest_common.sh@1367 -- # local bdev_name=61843b1c-be6e-49d7-81f7-bb3df42a145b 00:16:11.605 16:25:31 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:11.605 16:25:31 -- common/autotest_common.sh@1369 -- # local bs 00:16:11.605 16:25:31 -- common/autotest_common.sh@1370 -- # local nb 00:16:11.605 16:25:31 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61843b1c-be6e-49d7-81f7-bb3df42a145b 00:16:11.864 16:25:31 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:11.864 { 00:16:11.864 "name": "61843b1c-be6e-49d7-81f7-bb3df42a145b", 00:16:11.864 "aliases": [ 
00:16:11.864 "lvs/nvme0n1p0" 00:16:11.864 ], 00:16:11.864 "product_name": "Logical Volume", 00:16:11.864 "block_size": 4096, 00:16:11.864 "num_blocks": 26476544, 00:16:11.864 "uuid": "61843b1c-be6e-49d7-81f7-bb3df42a145b", 00:16:11.864 "assigned_rate_limits": { 00:16:11.864 "rw_ios_per_sec": 0, 00:16:11.864 "rw_mbytes_per_sec": 0, 00:16:11.864 "r_mbytes_per_sec": 0, 00:16:11.864 "w_mbytes_per_sec": 0 00:16:11.864 }, 00:16:11.864 "claimed": false, 00:16:11.864 "zoned": false, 00:16:11.864 "supported_io_types": { 00:16:11.864 "read": true, 00:16:11.864 "write": true, 00:16:11.864 "unmap": true, 00:16:11.864 "write_zeroes": true, 00:16:11.864 "flush": false, 00:16:11.864 "reset": true, 00:16:11.864 "compare": false, 00:16:11.864 "compare_and_write": false, 00:16:11.864 "abort": false, 00:16:11.864 "nvme_admin": false, 00:16:11.864 "nvme_io": false 00:16:11.864 }, 00:16:11.864 "driver_specific": { 00:16:11.864 "lvol": { 00:16:11.864 "lvol_store_uuid": "f44aadd9-b246-4f87-b12f-05b9c3d5e52b", 00:16:11.864 "base_bdev": "nvme0n1", 00:16:11.864 "thin_provision": true, 00:16:11.864 "snapshot": false, 00:16:11.864 "clone": false, 00:16:11.864 "esnap_clone": false 00:16:11.864 } 00:16:11.864 } 00:16:11.864 } 00:16:11.864 ]' 00:16:11.864 16:25:31 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:11.864 16:25:31 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:11.864 16:25:31 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:11.864 16:25:31 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:11.864 16:25:31 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:11.864 16:25:31 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:11.864 16:25:31 -- ftl/common.sh@41 -- # local base_size=5171 00:16:11.864 16:25:31 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:11.864 16:25:31 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:12.123 16:25:31 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:12.123 16:25:31 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:12.123 16:25:31 -- ftl/common.sh@48 -- # get_bdev_size 61843b1c-be6e-49d7-81f7-bb3df42a145b 00:16:12.123 16:25:31 -- common/autotest_common.sh@1367 -- # local bdev_name=61843b1c-be6e-49d7-81f7-bb3df42a145b 00:16:12.123 16:25:31 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:12.123 16:25:31 -- common/autotest_common.sh@1369 -- # local bs 00:16:12.123 16:25:31 -- common/autotest_common.sh@1370 -- # local nb 00:16:12.123 16:25:31 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61843b1c-be6e-49d7-81f7-bb3df42a145b 00:16:12.381 16:25:31 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:12.381 { 00:16:12.381 "name": "61843b1c-be6e-49d7-81f7-bb3df42a145b", 00:16:12.381 "aliases": [ 00:16:12.381 "lvs/nvme0n1p0" 00:16:12.381 ], 00:16:12.381 "product_name": "Logical Volume", 00:16:12.381 "block_size": 4096, 00:16:12.381 "num_blocks": 26476544, 00:16:12.381 "uuid": "61843b1c-be6e-49d7-81f7-bb3df42a145b", 00:16:12.381 "assigned_rate_limits": { 00:16:12.381 "rw_ios_per_sec": 0, 00:16:12.381 "rw_mbytes_per_sec": 0, 00:16:12.381 "r_mbytes_per_sec": 0, 00:16:12.381 "w_mbytes_per_sec": 0 00:16:12.381 }, 00:16:12.381 "claimed": false, 00:16:12.381 "zoned": false, 00:16:12.381 "supported_io_types": { 00:16:12.381 "read": true, 00:16:12.381 "write": true, 00:16:12.381 "unmap": true, 00:16:12.381 "write_zeroes": true, 00:16:12.381 "flush": false, 00:16:12.381 "reset": true, 
00:16:12.381 "compare": false, 00:16:12.381 "compare_and_write": false, 00:16:12.381 "abort": false, 00:16:12.381 "nvme_admin": false, 00:16:12.381 "nvme_io": false 00:16:12.381 }, 00:16:12.381 "driver_specific": { 00:16:12.381 "lvol": { 00:16:12.381 "lvol_store_uuid": "f44aadd9-b246-4f87-b12f-05b9c3d5e52b", 00:16:12.381 "base_bdev": "nvme0n1", 00:16:12.381 "thin_provision": true, 00:16:12.381 "snapshot": false, 00:16:12.381 "clone": false, 00:16:12.381 "esnap_clone": false 00:16:12.381 } 00:16:12.381 } 00:16:12.381 } 00:16:12.381 ]' 00:16:12.381 16:25:31 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:12.381 16:25:31 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:12.381 16:25:31 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:12.381 16:25:31 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:12.381 16:25:31 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:12.381 16:25:31 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:12.381 16:25:31 -- ftl/common.sh@48 -- # cache_size=5171 00:16:12.381 16:25:31 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:12.639 16:25:32 -- ftl/bdevperf.sh@24 -- # nv_cache=nvc0n1p0 00:16:12.639 16:25:32 -- ftl/bdevperf.sh@26 -- # get_bdev_size 61843b1c-be6e-49d7-81f7-bb3df42a145b 00:16:12.639 16:25:32 -- common/autotest_common.sh@1367 -- # local bdev_name=61843b1c-be6e-49d7-81f7-bb3df42a145b 00:16:12.639 16:25:32 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:12.639 16:25:32 -- common/autotest_common.sh@1369 -- # local bs 00:16:12.639 16:25:32 -- common/autotest_common.sh@1370 -- # local nb 00:16:12.639 16:25:32 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 61843b1c-be6e-49d7-81f7-bb3df42a145b 00:16:12.639 16:25:32 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:12.639 { 00:16:12.639 "name": "61843b1c-be6e-49d7-81f7-bb3df42a145b", 00:16:12.639 "aliases": [ 00:16:12.639 "lvs/nvme0n1p0" 00:16:12.639 ], 00:16:12.639 "product_name": "Logical Volume", 00:16:12.639 "block_size": 4096, 00:16:12.639 "num_blocks": 26476544, 00:16:12.639 "uuid": "61843b1c-be6e-49d7-81f7-bb3df42a145b", 00:16:12.639 "assigned_rate_limits": { 00:16:12.639 "rw_ios_per_sec": 0, 00:16:12.639 "rw_mbytes_per_sec": 0, 00:16:12.639 "r_mbytes_per_sec": 0, 00:16:12.639 "w_mbytes_per_sec": 0 00:16:12.639 }, 00:16:12.639 "claimed": false, 00:16:12.639 "zoned": false, 00:16:12.639 "supported_io_types": { 00:16:12.639 "read": true, 00:16:12.639 "write": true, 00:16:12.639 "unmap": true, 00:16:12.639 "write_zeroes": true, 00:16:12.639 "flush": false, 00:16:12.639 "reset": true, 00:16:12.639 "compare": false, 00:16:12.639 "compare_and_write": false, 00:16:12.639 "abort": false, 00:16:12.639 "nvme_admin": false, 00:16:12.639 "nvme_io": false 00:16:12.639 }, 00:16:12.639 "driver_specific": { 00:16:12.639 "lvol": { 00:16:12.639 "lvol_store_uuid": "f44aadd9-b246-4f87-b12f-05b9c3d5e52b", 00:16:12.639 "base_bdev": "nvme0n1", 00:16:12.639 "thin_provision": true, 00:16:12.639 "snapshot": false, 00:16:12.639 "clone": false, 00:16:12.639 "esnap_clone": false 00:16:12.639 } 00:16:12.639 } 00:16:12.639 } 00:16:12.639 ]' 00:16:12.639 16:25:32 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:12.899 16:25:32 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:12.899 16:25:32 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:12.899 16:25:32 -- common/autotest_common.sh@1373 -- # 
nb=26476544 00:16:12.899 16:25:32 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:12.899 16:25:32 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:12.899 16:25:32 -- ftl/bdevperf.sh@26 -- # l2p_dram_size_mb=20 00:16:12.899 16:25:32 -- ftl/bdevperf.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 61843b1c-be6e-49d7-81f7-bb3df42a145b -c nvc0n1p0 --l2p_dram_limit 20 00:16:12.899 [2024-11-09 16:25:32.615191] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.899 [2024-11-09 16:25:32.615243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:12.899 [2024-11-09 16:25:32.615263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:12.899 [2024-11-09 16:25:32.615272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.899 [2024-11-09 16:25:32.615332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.899 [2024-11-09 16:25:32.615344] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:12.899 [2024-11-09 16:25:32.615357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:12.899 [2024-11-09 16:25:32.615370] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.899 [2024-11-09 16:25:32.615394] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:12.899 [2024-11-09 16:25:32.617824] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:12.899 [2024-11-09 16:25:32.617851] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.899 [2024-11-09 16:25:32.617858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:12.899 [2024-11-09 16:25:32.617867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.462 ms 00:16:12.899 [2024-11-09 16:25:32.617874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.899 [2024-11-09 16:25:32.617898] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0054a8fa-f361-40e6-8bee-9d8fde62bcae 00:16:12.899 [2024-11-09 16:25:32.619584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.899 [2024-11-09 16:25:32.619722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:12.899 [2024-11-09 16:25:32.619761] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:12.899 [2024-11-09 16:25:32.619790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.899 [2024-11-09 16:25:32.629575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.899 [2024-11-09 16:25:32.629644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:12.899 [2024-11-09 16:25:32.629656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.521 ms 00:16:12.899 [2024-11-09 16:25:32.629666] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.899 [2024-11-09 16:25:32.629756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.899 [2024-11-09 16:25:32.629769] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:12.899 [2024-11-09 16:25:32.629778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:16:12.899 [2024-11-09 16:25:32.629799] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.899 [2024-11-09 16:25:32.629858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.899 [2024-11-09 16:25:32.629871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:12.899 [2024-11-09 16:25:32.629881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:16:12.899 [2024-11-09 16:25:32.629890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.899 [2024-11-09 16:25:32.629914] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:12.899 [2024-11-09 16:25:32.634010] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.899 [2024-11-09 16:25:32.634040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:12.899 [2024-11-09 16:25:32.634052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.101 ms 00:16:12.899 [2024-11-09 16:25:32.634060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.899 [2024-11-09 16:25:32.634097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.899 [2024-11-09 16:25:32.634105] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:12.899 [2024-11-09 16:25:32.634115] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:16:12.899 [2024-11-09 16:25:32.634123] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.899 [2024-11-09 16:25:32.634152] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:12.899 [2024-11-09 16:25:32.634298] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:12.899 [2024-11-09 16:25:32.634316] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:12.899 [2024-11-09 16:25:32.634327] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:12.899 [2024-11-09 16:25:32.634340] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:12.899 [2024-11-09 16:25:32.634350] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:12.899 [2024-11-09 16:25:32.634360] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:12.899 [2024-11-09 16:25:32.634368] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:12.899 [2024-11-09 16:25:32.634381] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:12.899 [2024-11-09 16:25:32.634389] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:12.899 [2024-11-09 16:25:32.634398] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.899 [2024-11-09 16:25:32.634407] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:12.899 [2024-11-09 16:25:32.634416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.248 ms 00:16:12.899 [2024-11-09 16:25:32.634423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.899 [2024-11-09 16:25:32.634485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.899 [2024-11-09 16:25:32.634494] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] 
name: Verify layout 00:16:12.899 [2024-11-09 16:25:32.634503] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:12.899 [2024-11-09 16:25:32.634510] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.899 [2024-11-09 16:25:32.634582] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:12.899 [2024-11-09 16:25:32.634591] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:12.899 [2024-11-09 16:25:32.634603] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:12.899 [2024-11-09 16:25:32.634616] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.899 [2024-11-09 16:25:32.634626] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:12.899 [2024-11-09 16:25:32.634632] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:12.899 [2024-11-09 16:25:32.634641] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:12.899 [2024-11-09 16:25:32.634648] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:12.899 [2024-11-09 16:25:32.634658] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:12.899 [2024-11-09 16:25:32.634665] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:12.899 [2024-11-09 16:25:32.634675] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:12.899 [2024-11-09 16:25:32.634683] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:12.899 [2024-11-09 16:25:32.634692] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:12.900 [2024-11-09 16:25:32.634703] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:12.900 [2024-11-09 16:25:32.634711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:16:12.900 [2024-11-09 16:25:32.634719] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.900 [2024-11-09 16:25:32.634729] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:12.900 [2024-11-09 16:25:32.634736] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:16:12.900 [2024-11-09 16:25:32.634744] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.900 [2024-11-09 16:25:32.634751] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:12.900 [2024-11-09 16:25:32.634759] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:16:12.900 [2024-11-09 16:25:32.634765] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:12.900 [2024-11-09 16:25:32.634774] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:12.900 [2024-11-09 16:25:32.634782] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:12.900 [2024-11-09 16:25:32.634790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:12.900 [2024-11-09 16:25:32.634796] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:12.900 [2024-11-09 16:25:32.634805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:16:12.900 [2024-11-09 16:25:32.634811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:12.900 [2024-11-09 16:25:32.634820] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:12.900 [2024-11-09 16:25:32.634827] ftl_layout.c: 116:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:12.900 [2024-11-09 16:25:32.634835] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:12.900 [2024-11-09 16:25:32.634842] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:12.900 [2024-11-09 16:25:32.634853] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:16:12.900 [2024-11-09 16:25:32.634860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:12.900 [2024-11-09 16:25:32.634868] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:12.900 [2024-11-09 16:25:32.634876] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:12.900 [2024-11-09 16:25:32.634885] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:12.900 [2024-11-09 16:25:32.634891] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:12.900 [2024-11-09 16:25:32.634900] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:16:12.900 [2024-11-09 16:25:32.634907] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:12.900 [2024-11-09 16:25:32.634915] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:12.900 [2024-11-09 16:25:32.634923] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:12.900 [2024-11-09 16:25:32.634932] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:12.900 [2024-11-09 16:25:32.634940] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:12.900 [2024-11-09 16:25:32.634949] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:12.900 [2024-11-09 16:25:32.634957] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:12.900 [2024-11-09 16:25:32.634965] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:12.900 [2024-11-09 16:25:32.634973] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:12.900 [2024-11-09 16:25:32.634983] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:12.900 [2024-11-09 16:25:32.634991] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:12.900 [2024-11-09 16:25:32.635000] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:12.900 [2024-11-09 16:25:32.635010] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:12.900 [2024-11-09 16:25:32.635022] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:12.900 [2024-11-09 16:25:32.635030] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:16:12.900 [2024-11-09 16:25:32.635039] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:16:12.900 [2024-11-09 16:25:32.635046] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:16:12.900 [2024-11-09 16:25:32.635055] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:16:12.900 [2024-11-09 16:25:32.635062] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:16:12.900 [2024-11-09 16:25:32.635070] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:16:12.900 [2024-11-09 16:25:32.635078] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:16:12.900 [2024-11-09 16:25:32.635086] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:16:12.900 [2024-11-09 16:25:32.635093] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:16:12.900 [2024-11-09 16:25:32.635103] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:16:12.900 [2024-11-09 16:25:32.635110] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:16:12.900 [2024-11-09 16:25:32.635121] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:16:12.900 [2024-11-09 16:25:32.635129] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:12.900 [2024-11-09 16:25:32.635139] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:12.900 [2024-11-09 16:25:32.635147] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:12.900 [2024-11-09 16:25:32.635155] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:12.900 [2024-11-09 16:25:32.635162] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:12.900 [2024-11-09 16:25:32.635171] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:12.900 [2024-11-09 16:25:32.635179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.900 [2024-11-09 16:25:32.635188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:12.900 [2024-11-09 16:25:32.635196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.644 ms 00:16:12.900 [2024-11-09 16:25:32.635205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.900 [2024-11-09 16:25:32.652065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.900 [2024-11-09 16:25:32.652201] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:12.900 [2024-11-09 16:25:32.652277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.614 ms 00:16:12.900 [2024-11-09 16:25:32.652306] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:12.900 [2024-11-09 16:25:32.652407] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:12.900 [2024-11-09 16:25:32.652437] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:12.900 [2024-11-09 16:25:32.652457] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:12.900 [2024-11-09 16:25:32.652478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.162 [2024-11-09 16:25:32.698498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.162 [2024-11-09 16:25:32.698646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:13.162 [2024-11-09 16:25:32.698706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.960 ms 00:16:13.162 [2024-11-09 16:25:32.698734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.162 [2024-11-09 16:25:32.698779] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.162 [2024-11-09 16:25:32.698806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:13.162 [2024-11-09 16:25:32.698826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:13.162 [2024-11-09 16:25:32.698847] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.162 [2024-11-09 16:25:32.699375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.162 [2024-11-09 16:25:32.699820] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:13.162 [2024-11-09 16:25:32.700044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.467 ms 00:16:13.162 [2024-11-09 16:25:32.700190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.162 [2024-11-09 16:25:32.700763] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.162 [2024-11-09 16:25:32.700828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:13.162 [2024-11-09 16:25:32.700862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:16:13.162 [2024-11-09 16:25:32.700886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.162 [2024-11-09 16:25:32.721527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.162 [2024-11-09 16:25:32.721560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:13.162 [2024-11-09 16:25:32.721573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.602 ms 00:16:13.162 [2024-11-09 16:25:32.721583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.162 [2024-11-09 16:25:32.734735] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:13.162 [2024-11-09 16:25:32.741144] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.162 [2024-11-09 16:25:32.741181] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:13.162 [2024-11-09 16:25:32.741194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.479 ms 00:16:13.162 [2024-11-09 16:25:32.741202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.162 [2024-11-09 16:25:32.831832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:13.162 [2024-11-09 16:25:32.831912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:13.162 [2024-11-09 16:25:32.831930] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.583 ms 00:16:13.162 [2024-11-09 16:25:32.831939] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:13.162 [2024-11-09 16:25:32.832003] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: 
*NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:13.162 [2024-11-09 16:25:32.832016] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:17.400 [2024-11-09 16:25:36.671550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.400 [2024-11-09 16:25:36.671823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:17.400 [2024-11-09 16:25:36.671862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3839.526 ms 00:16:17.400 [2024-11-09 16:25:36.671873] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.400 [2024-11-09 16:25:36.672130] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.400 [2024-11-09 16:25:36.672144] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:17.400 [2024-11-09 16:25:36.672157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:16:17.400 [2024-11-09 16:25:36.672166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.400 [2024-11-09 16:25:36.700160] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.400 [2024-11-09 16:25:36.700218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:17.400 [2024-11-09 16:25:36.700256] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.898 ms 00:16:17.400 [2024-11-09 16:25:36.700269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.400 [2024-11-09 16:25:36.727140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.400 [2024-11-09 16:25:36.727359] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:17.400 [2024-11-09 16:25:36.727393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.805 ms 00:16:17.400 [2024-11-09 16:25:36.727402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.400 [2024-11-09 16:25:36.727777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.400 [2024-11-09 16:25:36.727793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:17.400 [2024-11-09 16:25:36.727805] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:16:17.400 [2024-11-09 16:25:36.727813] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.400 [2024-11-09 16:25:36.803954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.400 [2024-11-09 16:25:36.804006] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:17.400 [2024-11-09 16:25:36.804024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.077 ms 00:16:17.400 [2024-11-09 16:25:36.804033] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.400 [2024-11-09 16:25:36.832732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.400 [2024-11-09 16:25:36.832792] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:17.400 [2024-11-09 16:25:36.832810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.635 ms 00:16:17.400 [2024-11-09 16:25:36.832819] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.400 [2024-11-09 16:25:36.834523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.400 [2024-11-09 
16:25:36.834723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:17.400 [2024-11-09 16:25:36.834752] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.647 ms 00:16:17.400 [2024-11-09 16:25:36.834764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.400 [2024-11-09 16:25:36.862577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.400 [2024-11-09 16:25:36.862631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:17.400 [2024-11-09 16:25:36.862648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.740 ms 00:16:17.400 [2024-11-09 16:25:36.862655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.400 [2024-11-09 16:25:36.862714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.400 [2024-11-09 16:25:36.862724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:17.400 [2024-11-09 16:25:36.862739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:16:17.400 [2024-11-09 16:25:36.862747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.400 [2024-11-09 16:25:36.862858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:17.400 [2024-11-09 16:25:36.862869] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:17.400 [2024-11-09 16:25:36.862879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:16:17.400 [2024-11-09 16:25:36.862888] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:17.400 [2024-11-09 16:25:36.864078] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4248.357 ms, result 0 00:16:17.400 { 00:16:17.400 "name": "ftl0", 00:16:17.400 "uuid": "0054a8fa-f361-40e6-8bee-9d8fde62bcae" 00:16:17.400 } 00:16:17.400 16:25:36 -- ftl/bdevperf.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:17.400 16:25:36 -- ftl/bdevperf.sh@29 -- # jq -r .name 00:16:17.400 16:25:36 -- ftl/bdevperf.sh@29 -- # grep -qw ftl0 00:16:17.400 16:25:37 -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:17.661 [2024-11-09 16:25:37.176166] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:17.661 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:17.661 Zero copy mechanism will not be used. 00:16:17.661 Running I/O for 4 seconds... 
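FTL startup is complete at this point ('FTL startup', duration = 4248.357 ms, dominated by the ~3.8 s NV cache scrub), ftl0 has passed the bdev_ftl_get_stats name check, and the first workload is running. For reference, the stack underneath ftl0 can be rebuilt by hand with the same RPCs this run just issued. A minimal sketch, assuming a bdevperf process started with '-z -T ftl0' is already listening on /var/tmp/spdk.sock; the addresses and sizes are the ones from this log, the <...> placeholders stand for the UUIDs returned by the create calls, and RPC= is local shorthand rather than anything the test defines:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Base device: the QEMU NVMe namespace that will hold FTL data.
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
  # Thin-provisioned 103424 MiB logical volume carved out of nvme0n1.
  $RPC bdev_lvol_create_lvstore nvme0n1 lvs
  $RPC bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>
  # Cache device: second controller, split so a 5171 MiB partition serves
  # as the FTL non-volatile write-buffer cache (nvc0n1p0).
  $RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0
  $RPC bdev_split_create nvc0n1 -s 5171 1
  # Bind base + cache into the FTL bdev, capping the L2P at 20 MiB of DRAM.
  $RPC -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20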
00:16:21.872 
00:16:21.872 Latency(us)
00:16:21.872 [2024-11-09T16:25:41.643Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:21.873 [2024-11-09T16:25:41.643Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:16:21.873 ftl0 : 4.00 1012.32 67.22 0.00 0.00 1039.51 201.65 1928.27
00:16:21.873 [2024-11-09T16:25:41.643Z] ===================================================================================================================
00:16:21.873 [2024-11-09T16:25:41.643Z] Total : 1012.32 67.22 0.00 0.00 1039.51 201.65 1928.27
00:16:21.873 0
00:16:21.873 [2024-11-09 16:25:41.186445] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:16:21.873 16:25:41 -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
00:16:21.873 [2024-11-09 16:25:41.302604] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:16:21.873 Running I/O for 4 seconds...
00:16:26.083 
00:16:26.083 Latency(us)
00:16:26.083 [2024-11-09T16:25:45.853Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:26.083 [2024-11-09T16:25:45.853Z] Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:16:26.083 ftl0 : 4.04 5111.97 19.97 0.00 0.00 24910.06 345.01 52025.50
00:16:26.083 [2024-11-09T16:25:45.853Z] ===================================================================================================================
00:16:26.083 [2024-11-09T16:25:45.853Z] Total : 5111.97 19.97 0.00 0.00 24910.06 0.00 52025.50
00:16:26.083 [2024-11-09 16:25:45.352624] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:16:26.083 0
00:16:26.083 16:25:45 -- ftl/bdevperf.sh@33 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
00:16:26.083 [2024-11-09 16:25:45.473788] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
00:16:26.083 Running I/O for 4 seconds...
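The two randwrite tables above are internally consistent: MiB/s is just IOPS times the I/O size, and by Little's law the average latency should sit near queue depth divided by IOPS. A quick cross-check with plain awk (nothing from the test tree, just arithmetic on the reported numbers):

  awk 'BEGIN {
    printf "qd=1:   %.2f MiB/s, ~%.0f us per I/O\n", 1012.32 * 69632 / 2^20, 1e6 * 1 / 1012.32
    printf "qd=128: %.2f MiB/s, ~%.0f us per I/O\n", 5111.97 * 4096 / 2^20, 1e6 * 128 / 5111.97
  }'

This reproduces the reported 67.22 and 19.97 MiB/s exactly, and the estimates of ~988 us and ~25039 us sit right next to the measured averages of 1039.51 us and 24910.06 us: raising the queue depth from 1 to 128 buys about 5x the IOPS at about 24x the per-I/O latency on this emulated drive. The verify run that just started covers an LBA range of length 0x1400000, i.e. 20971520 blocks, matching the 20971520 L2P entries reported during startup, so it reads back the whole device.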
00:16:30.299 
00:16:30.299 Latency(us)
00:16:30.299 [2024-11-09T16:25:50.069Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:30.299 [2024-11-09T16:25:50.069Z] Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:30.299 Verification LBA range: start 0x0 length 0x1400000
00:16:30.299 ftl0 : 4.01 8305.87 32.44 0.00 0.00 15376.52 72.07 100018.02
00:16:30.299 [2024-11-09T16:25:50.069Z] ===================================================================================================================
00:16:30.299 [2024-11-09T16:25:50.069Z] Total : 8305.87 32.44 0.00 0.00 15376.52 0.00 100018.02
00:16:30.299 [2024-11-09 16:25:49.498700] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0
00:16:30.299 0
00:16:30.299 16:25:49 -- ftl/bdevperf.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
00:16:30.299 [2024-11-09 16:25:49.701465] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.299 [2024-11-09 16:25:49.701695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:16:30.299 [2024-11-09 16:25:49.701725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:16:30.299 [2024-11-09 16:25:49.701734] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.299 [2024-11-09 16:25:49.701766] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:16:30.299 [2024-11-09 16:25:49.704809] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.299 [2024-11-09 16:25:49.704969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:16:30.299 [2024-11-09 16:25:49.704991] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.026 ms
00:16:30.299 [2024-11-09 16:25:49.705005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.299 [2024-11-09 16:25:49.708179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.299 [2024-11-09 16:25:49.708366] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:16:30.299 [2024-11-09 16:25:49.708389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.142 ms
00:16:30.300 [2024-11-09 16:25:49.708400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.300 [2024-11-09 16:25:49.925281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.300 [2024-11-09 16:25:49.925358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:16:30.300 [2024-11-09 16:25:49.925382] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 216.853 ms
00:16:30.300 [2024-11-09 16:25:49.925394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.300 [2024-11-09 16:25:49.931552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.300 [2024-11-09 16:25:49.931733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps
00:16:30.300 [2024-11-09 16:25:49.931754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.114 ms
00:16:30.300 [2024-11-09 16:25:49.931764] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.300 [2024-11-09 16:25:49.958989] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:30.300 [2024-11-09 16:25:49.959171] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:16:30.300 [2024-11-09 16:25:49.959193] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.158 ms 00:16:30.300 [2024-11-09 16:25:49.959206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.300 [2024-11-09 16:25:49.977671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.300 [2024-11-09 16:25:49.977862] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:30.300 [2024-11-09 16:25:49.977886] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.405 ms 00:16:30.300 [2024-11-09 16:25:49.977898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.300 [2024-11-09 16:25:49.978085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.300 [2024-11-09 16:25:49.978101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:30.300 [2024-11-09 16:25:49.978110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:16:30.300 [2024-11-09 16:25:49.978121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.300 [2024-11-09 16:25:50.004525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.300 [2024-11-09 16:25:50.004688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:30.300 [2024-11-09 16:25:50.004707] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.386 ms 00:16:30.300 [2024-11-09 16:25:50.004717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.300 [2024-11-09 16:25:50.029991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.300 [2024-11-09 16:25:50.030044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:30.300 [2024-11-09 16:25:50.030055] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.238 ms 00:16:30.300 [2024-11-09 16:25:50.030068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.300 [2024-11-09 16:25:50.055294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.300 [2024-11-09 16:25:50.055464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:30.300 [2024-11-09 16:25:50.055483] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.182 ms 00:16:30.300 [2024-11-09 16:25:50.055493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.562 [2024-11-09 16:25:50.080841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.563 [2024-11-09 16:25:50.080892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:30.563 [2024-11-09 16:25:50.080904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.259 ms 00:16:30.563 [2024-11-09 16:25:50.080913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.563 [2024-11-09 16:25:50.080957] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:30.563 [2024-11-09 16:25:50.080975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.080986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.080997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081005] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 16:25:50.081264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:30.563 [2024-11-09 
16:25:50.081274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
[... 71 identical ftl_dev_dump_bands entries elided: Bands 30-100 all report 0 / 261120 wr_cnt: 0 state: free ...]
00:16:30.564 [2024-11-09 16:25:50.081951] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:16:30.564 [2024-11-09 16:25:50.081959] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0054a8fa-f361-40e6-8bee-9d8fde62bcae
00:16:30.564 [2024-11-09 16:25:50.081971] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:30.564
[2024-11-09 16:25:50.081979] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:30.564 [2024-11-09 16:25:50.081988] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:30.564 [2024-11-09 16:25:50.081996] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:30.564 [2024-11-09 16:25:50.082007] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:30.564 [2024-11-09 16:25:50.082018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:30.564 [2024-11-09 16:25:50.082027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:30.564 [2024-11-09 16:25:50.082033] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:30.564 [2024-11-09 16:25:50.082042] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:30.564 [2024-11-09 16:25:50.082050] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.564 [2024-11-09 16:25:50.082060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:30.564 [2024-11-09 16:25:50.082068] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.095 ms 00:16:30.564 [2024-11-09 16:25:50.082078] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.564 [2024-11-09 16:25:50.095985] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.564 [2024-11-09 16:25:50.096031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:30.564 [2024-11-09 16:25:50.096043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.857 ms 00:16:30.564 [2024-11-09 16:25:50.096059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.564 [2024-11-09 16:25:50.096301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:30.564 [2024-11-09 16:25:50.096314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:30.564 [2024-11-09 16:25:50.096323] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.220 ms 00:16:30.564 [2024-11-09 16:25:50.096332] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.564 [2024-11-09 16:25:50.137347] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.564 [2024-11-09 16:25:50.137398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:30.564 [2024-11-09 16:25:50.137414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.564 [2024-11-09 16:25:50.137424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.564 [2024-11-09 16:25:50.137493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.564 [2024-11-09 16:25:50.137504] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:30.564 [2024-11-09 16:25:50.137513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.564 [2024-11-09 16:25:50.137523] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.564 [2024-11-09 16:25:50.137596] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.564 [2024-11-09 16:25:50.137610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:30.564 [2024-11-09 16:25:50.137618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.564 [2024-11-09 16:25:50.137633] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.564 [2024-11-09 16:25:50.137650] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.564 [2024-11-09 16:25:50.137660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:30.564 [2024-11-09 16:25:50.137669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.564 [2024-11-09 16:25:50.137679] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.564 [2024-11-09 16:25:50.218148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.564 [2024-11-09 16:25:50.218405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:30.564 [2024-11-09 16:25:50.218429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.564 [2024-11-09 16:25:50.218445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.564 [2024-11-09 16:25:50.250003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.564 [2024-11-09 16:25:50.250180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:30.564 [2024-11-09 16:25:50.250199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.564 [2024-11-09 16:25:50.250210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.564 [2024-11-09 16:25:50.250310] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.564 [2024-11-09 16:25:50.250325] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:30.564 [2024-11-09 16:25:50.250334] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.564 [2024-11-09 16:25:50.250348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.564 [2024-11-09 16:25:50.250396] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.564 [2024-11-09 16:25:50.250408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:30.564 [2024-11-09 16:25:50.250417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.564 [2024-11-09 16:25:50.250427] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.564 [2024-11-09 16:25:50.250527] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.564 [2024-11-09 16:25:50.250540] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:30.564 [2024-11-09 16:25:50.250548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.564 [2024-11-09 16:25:50.250558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.564 [2024-11-09 16:25:50.250588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.564 [2024-11-09 16:25:50.250603] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:30.564 [2024-11-09 16:25:50.250611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:30.564 [2024-11-09 16:25:50.250621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:30.564 [2024-11-09 16:25:50.250662] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:30.564 [2024-11-09 16:25:50.250674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:30.564 [2024-11-09 16:25:50.250682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms
00:16:30.564 [2024-11-09 16:25:50.250694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.564 [2024-11-09 16:25:50.250746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:30.564 [2024-11-09 16:25:50.250759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:16:30.564 [2024-11-09 16:25:50.250769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:30.564 [2024-11-09 16:25:50.250778] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:30.564 [2024-11-09 16:25:50.250922] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 549.412 ms, result 0
00:16:30.565 true
00:16:30.565 16:25:50 -- ftl/bdevperf.sh@37 -- # killprocess 71683
00:16:30.565 16:25:50 -- common/autotest_common.sh@936 -- # '[' -z 71683 ']'
00:16:30.565 16:25:50 -- common/autotest_common.sh@940 -- # kill -0 71683
00:16:30.565 16:25:50 -- common/autotest_common.sh@941 -- # uname
00:16:30.565 16:25:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:30.565 16:25:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71683
00:16:30.565 16:25:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:30.565 16:25:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:30.565 16:25:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71683'
killing process with pid 71683
16:25:50 -- common/autotest_common.sh@955 -- # kill 71683
Received shutdown signal, test time was about 4.000000 seconds
00:16:30.565
00:16:30.565 Latency(us)
00:16:30.565 [2024-11-09T16:25:50.335Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:30.565 [2024-11-09T16:25:50.335Z] ===================================================================================================================
00:16:30.565 [2024-11-09T16:25:50.335Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:30.565 16:25:50 -- common/autotest_common.sh@960 -- # wait 71683
00:16:33.109 16:25:52 -- ftl/bdevperf.sh@38 -- # trap - SIGINT SIGTERM EXIT
00:16:33.109 16:25:52 -- ftl/bdevperf.sh@39 -- # timing_exit '/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0'
00:16:33.109 16:25:52 -- common/autotest_common.sh@728 -- # xtrace_disable
00:16:33.109 16:25:52 -- common/autotest_common.sh@10 -- # set +x
00:16:33.109 Remove shared memory files
16:25:52 -- ftl/bdevperf.sh@41 -- # remove_shm
16:25:52 -- ftl/common.sh@204 -- # echo Remove shared memory files
16:25:52 -- ftl/common.sh@205 -- # rm -f rm -f
16:25:52 -- ftl/common.sh@206 -- # rm -f rm -f
16:25:52 -- ftl/common.sh@207 -- # rm -f rm -f
16:25:52 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
16:25:52 -- ftl/common.sh@209 -- # rm -f rm -f
00:16:33.109 ************************************
00:16:33.109 END TEST ftl_bdevperf
00:16:33.109 ************************************
00:16:33.109
00:16:33.109 real 0m23.708s
00:16:33.109 user 0m25.999s
00:16:33.109 sys 0m1.052s
00:16:33.109 16:25:52 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:16:33.109 16:25:52 -- common/autotest_common.sh@10 -- # set +x
00:16:33.109 16:25:52 -- ftl/ftl.sh@76 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0
00:16:33.109 16:25:52 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']'
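Editor's note on the "WAF: inf" line in the ftl_dev_dump_stats block above: write amplification factor is, conventionally, total device writes divided by user writes, and this bdevperf run tore ftl0 down after 960 internal writes and zero user writes, so the ratio is undefined and the dump prints "inf". A minimal bash sketch of that arithmetic follows; the waf helper name is illustrative only and is not part of the SPDK test scripts.

    waf() {
        # Print total-device-writes / user-writes, or "inf" when no user I/O landed.
        local total=$1 user=$2
        if [ "$user" -eq 0 ]; then
            echo inf
        else
            awk -v t="$total" -v u="$user" 'BEGIN { printf "%.2f\n", t / u }'
        fi
    }
    waf 960 0   # -> inf, matching the dump above (total writes: 960, user writes: 0)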
00:16:33.109 16:25:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:33.109 16:25:52 -- common/autotest_common.sh@10 -- # set +x 00:16:33.109 ************************************ 00:16:33.109 START TEST ftl_trim 00:16:33.109 ************************************ 00:16:33.109 16:25:52 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:07.0 0000:00:06.0 00:16:33.109 * Looking for test storage... 00:16:33.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:33.109 16:25:52 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:33.109 16:25:52 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:33.109 16:25:52 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:33.109 16:25:52 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:33.109 16:25:52 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:33.109 16:25:52 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:33.109 16:25:52 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:33.109 16:25:52 -- scripts/common.sh@335 -- # IFS=.-: 00:16:33.109 16:25:52 -- scripts/common.sh@335 -- # read -ra ver1 00:16:33.109 16:25:52 -- scripts/common.sh@336 -- # IFS=.-: 00:16:33.109 16:25:52 -- scripts/common.sh@336 -- # read -ra ver2 00:16:33.109 16:25:52 -- scripts/common.sh@337 -- # local 'op=<' 00:16:33.109 16:25:52 -- scripts/common.sh@339 -- # ver1_l=2 00:16:33.109 16:25:52 -- scripts/common.sh@340 -- # ver2_l=1 00:16:33.109 16:25:52 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:33.109 16:25:52 -- scripts/common.sh@343 -- # case "$op" in 00:16:33.109 16:25:52 -- scripts/common.sh@344 -- # : 1 00:16:33.109 16:25:52 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:33.109 16:25:52 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:33.109 16:25:52 -- scripts/common.sh@364 -- # decimal 1 00:16:33.109 16:25:52 -- scripts/common.sh@352 -- # local d=1 00:16:33.109 16:25:52 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:33.109 16:25:52 -- scripts/common.sh@354 -- # echo 1 00:16:33.109 16:25:52 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:33.109 16:25:52 -- scripts/common.sh@365 -- # decimal 2 00:16:33.109 16:25:52 -- scripts/common.sh@352 -- # local d=2 00:16:33.109 16:25:52 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:33.109 16:25:52 -- scripts/common.sh@354 -- # echo 2 00:16:33.109 16:25:52 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:33.109 16:25:52 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:33.109 16:25:52 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:33.109 16:25:52 -- scripts/common.sh@367 -- # return 0 00:16:33.109 16:25:52 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:33.109 16:25:52 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:33.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.109 --rc genhtml_branch_coverage=1 00:16:33.109 --rc genhtml_function_coverage=1 00:16:33.109 --rc genhtml_legend=1 00:16:33.109 --rc geninfo_all_blocks=1 00:16:33.109 --rc geninfo_unexecuted_blocks=1 00:16:33.109 00:16:33.109 ' 00:16:33.109 16:25:52 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:33.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.109 --rc genhtml_branch_coverage=1 00:16:33.109 --rc genhtml_function_coverage=1 00:16:33.109 --rc genhtml_legend=1 00:16:33.109 --rc geninfo_all_blocks=1 00:16:33.109 --rc geninfo_unexecuted_blocks=1 00:16:33.109 00:16:33.109 ' 00:16:33.109 16:25:52 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:33.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.109 --rc genhtml_branch_coverage=1 00:16:33.109 --rc genhtml_function_coverage=1 00:16:33.109 --rc genhtml_legend=1 00:16:33.109 --rc geninfo_all_blocks=1 00:16:33.109 --rc geninfo_unexecuted_blocks=1 00:16:33.109 00:16:33.109 ' 00:16:33.109 16:25:52 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:33.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:33.109 --rc genhtml_branch_coverage=1 00:16:33.109 --rc genhtml_function_coverage=1 00:16:33.109 --rc genhtml_legend=1 00:16:33.109 --rc geninfo_all_blocks=1 00:16:33.109 --rc geninfo_unexecuted_blocks=1 00:16:33.109 00:16:33.109 ' 00:16:33.109 16:25:52 -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:33.109 16:25:52 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:16:33.109 16:25:52 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:33.109 16:25:52 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:33.109 16:25:52 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
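Editor's note: the xtrace above is trim.sh probing the installed lcov version. As the traced cmp_versions/lt/decimal helpers from scripts/common.sh show, each dotted version is split on "." and "-" (IFS=.-) and the numeric fields are compared one by one, so "lt 1.15 2" is true and the older-lcov option set (the --rc lcov_* flags captured above) gets selected. A condensed bash sketch of that comparison, under the assumption that a standalone version_lt name stands in for the helpers actually traced:

    version_lt() {
        # True (exit 0) when $1 is strictly older than $2, comparing dotted fields numerically.
        local IFS='.-'
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not less-than
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"   # true here: 1 < 2 on the first field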
00:16:33.109 16:25:52 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:33.109 16:25:52 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.109 16:25:52 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:33.109 16:25:52 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:33.109 16:25:52 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.109 16:25:52 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.109 16:25:52 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:33.109 16:25:52 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:33.109 16:25:52 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:33.109 16:25:52 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:33.109 16:25:52 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:33.110 16:25:52 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:33.110 16:25:52 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.110 16:25:52 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:33.110 16:25:52 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:33.110 16:25:52 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:33.110 16:25:52 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:33.110 16:25:52 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:33.110 16:25:52 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:33.110 16:25:52 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:33.110 16:25:52 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:33.110 16:25:52 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:33.110 16:25:52 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:33.110 16:25:52 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:33.110 16:25:52 -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.110 16:25:52 -- ftl/trim.sh@23 -- # device=0000:00:07.0 00:16:33.110 16:25:52 -- ftl/trim.sh@24 -- # cache_device=0000:00:06.0 00:16:33.110 16:25:52 -- ftl/trim.sh@25 -- # timeout=240 00:16:33.110 16:25:52 -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:16:33.110 16:25:52 -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:16:33.110 16:25:52 -- ftl/trim.sh@29 -- # [[ y != y ]] 00:16:33.110 16:25:52 -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:16:33.110 16:25:52 -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:16:33.110 16:25:52 -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:33.110 16:25:52 -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:33.110 16:25:52 -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:33.110 16:25:52 -- ftl/trim.sh@40 -- # svcpid=72070 00:16:33.110 16:25:52 -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:16:33.110 16:25:52 -- ftl/trim.sh@41 -- # waitforlisten 72070 00:16:33.110 16:25:52 -- common/autotest_common.sh@829 -- # '[' -z 72070 ']' 00:16:33.110 16:25:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.110 Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.110 16:25:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.110 16:25:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.110 16:25:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.110 16:25:52 -- common/autotest_common.sh@10 -- # set +x 00:16:33.110 [2024-11-09 16:25:52.780619] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:33.110 [2024-11-09 16:25:52.780970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72070 ] 00:16:33.371 [2024-11-09 16:25:52.933664] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:33.632 [2024-11-09 16:25:53.152957] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:33.632 [2024-11-09 16:25:53.153802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:33.632 [2024-11-09 16:25:53.154144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:33.632 [2024-11-09 16:25:53.154270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.574 16:25:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.574 16:25:54 -- common/autotest_common.sh@862 -- # return 0 00:16:34.574 16:25:54 -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:16:34.574 16:25:54 -- ftl/common.sh@54 -- # local name=nvme0 00:16:34.574 16:25:54 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:16:34.574 16:25:54 -- ftl/common.sh@56 -- # local size=103424 00:16:34.574 16:25:54 -- ftl/common.sh@59 -- # local base_bdev 00:16:34.574 16:25:54 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:16:35.147 16:25:54 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:35.147 16:25:54 -- ftl/common.sh@62 -- # local base_size 00:16:35.147 16:25:54 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:35.147 16:25:54 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:16:35.147 16:25:54 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:35.147 16:25:54 -- common/autotest_common.sh@1369 -- # local bs 00:16:35.147 16:25:54 -- common/autotest_common.sh@1370 -- # local nb 00:16:35.147 16:25:54 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:35.147 16:25:54 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:35.147 { 00:16:35.147 "name": "nvme0n1", 00:16:35.147 "aliases": [ 00:16:35.147 "cb7ab522-ef54-4372-8a1e-011027bd0ff4" 00:16:35.147 ], 00:16:35.147 "product_name": "NVMe disk", 00:16:35.147 "block_size": 4096, 00:16:35.147 "num_blocks": 1310720, 00:16:35.147 "uuid": "cb7ab522-ef54-4372-8a1e-011027bd0ff4", 00:16:35.147 "assigned_rate_limits": { 00:16:35.147 "rw_ios_per_sec": 0, 00:16:35.147 "rw_mbytes_per_sec": 0, 00:16:35.147 "r_mbytes_per_sec": 0, 00:16:35.147 "w_mbytes_per_sec": 0 00:16:35.147 }, 00:16:35.147 "claimed": true, 00:16:35.147 "claim_type": "read_many_write_one", 00:16:35.147 "zoned": false, 00:16:35.147 "supported_io_types": { 00:16:35.147 "read": true, 00:16:35.147 "write": true, 00:16:35.147 "unmap": true, 00:16:35.147 "write_zeroes": true, 
00:16:35.147 "flush": true, 00:16:35.147 "reset": true, 00:16:35.147 "compare": true, 00:16:35.147 "compare_and_write": false, 00:16:35.147 "abort": true, 00:16:35.147 "nvme_admin": true, 00:16:35.147 "nvme_io": true 00:16:35.147 }, 00:16:35.147 "driver_specific": { 00:16:35.147 "nvme": [ 00:16:35.147 { 00:16:35.147 "pci_address": "0000:00:07.0", 00:16:35.147 "trid": { 00:16:35.147 "trtype": "PCIe", 00:16:35.147 "traddr": "0000:00:07.0" 00:16:35.147 }, 00:16:35.147 "ctrlr_data": { 00:16:35.147 "cntlid": 0, 00:16:35.147 "vendor_id": "0x1b36", 00:16:35.147 "model_number": "QEMU NVMe Ctrl", 00:16:35.147 "serial_number": "12341", 00:16:35.147 "firmware_revision": "8.0.0", 00:16:35.147 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:35.147 "oacs": { 00:16:35.147 "security": 0, 00:16:35.147 "format": 1, 00:16:35.147 "firmware": 0, 00:16:35.147 "ns_manage": 1 00:16:35.147 }, 00:16:35.147 "multi_ctrlr": false, 00:16:35.147 "ana_reporting": false 00:16:35.147 }, 00:16:35.147 "vs": { 00:16:35.147 "nvme_version": "1.4" 00:16:35.147 }, 00:16:35.147 "ns_data": { 00:16:35.147 "id": 1, 00:16:35.147 "can_share": false 00:16:35.147 } 00:16:35.147 } 00:16:35.147 ], 00:16:35.147 "mp_policy": "active_passive" 00:16:35.147 } 00:16:35.147 } 00:16:35.147 ]' 00:16:35.147 16:25:54 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:35.147 16:25:54 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:35.147 16:25:54 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:35.147 16:25:54 -- common/autotest_common.sh@1373 -- # nb=1310720 00:16:35.147 16:25:54 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:16:35.147 16:25:54 -- common/autotest_common.sh@1377 -- # echo 5120 00:16:35.147 16:25:54 -- ftl/common.sh@63 -- # base_size=5120 00:16:35.147 16:25:54 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:35.147 16:25:54 -- ftl/common.sh@67 -- # clear_lvols 00:16:35.147 16:25:54 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:35.147 16:25:54 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:35.408 16:25:55 -- ftl/common.sh@28 -- # stores=f44aadd9-b246-4f87-b12f-05b9c3d5e52b 00:16:35.408 16:25:55 -- ftl/common.sh@29 -- # for lvs in $stores 00:16:35.408 16:25:55 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f44aadd9-b246-4f87-b12f-05b9c3d5e52b 00:16:35.669 16:25:55 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:35.930 16:25:55 -- ftl/common.sh@68 -- # lvs=271af190-802c-4120-87b6-89f4f0e09460 00:16:35.930 16:25:55 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 271af190-802c-4120-87b6-89f4f0e09460 00:16:35.930 16:25:55 -- ftl/trim.sh@43 -- # split_bdev=e85f28f6-7778-47d9-add1-39349489ea2b 00:16:35.930 16:25:55 -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:06.0 e85f28f6-7778-47d9-add1-39349489ea2b 00:16:35.930 16:25:55 -- ftl/common.sh@35 -- # local name=nvc0 00:16:35.930 16:25:55 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:16:35.930 16:25:55 -- ftl/common.sh@37 -- # local base_bdev=e85f28f6-7778-47d9-add1-39349489ea2b 00:16:35.930 16:25:55 -- ftl/common.sh@38 -- # local cache_size= 00:16:35.930 16:25:55 -- ftl/common.sh@41 -- # get_bdev_size e85f28f6-7778-47d9-add1-39349489ea2b 00:16:35.930 16:25:55 -- common/autotest_common.sh@1367 -- # local bdev_name=e85f28f6-7778-47d9-add1-39349489ea2b 00:16:35.930 16:25:55 -- 
common/autotest_common.sh@1368 -- # local bdev_info 00:16:35.930 16:25:55 -- common/autotest_common.sh@1369 -- # local bs 00:16:35.930 16:25:55 -- common/autotest_common.sh@1370 -- # local nb 00:16:35.930 16:25:55 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e85f28f6-7778-47d9-add1-39349489ea2b 00:16:36.191 16:25:55 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:36.191 { 00:16:36.191 "name": "e85f28f6-7778-47d9-add1-39349489ea2b", 00:16:36.191 "aliases": [ 00:16:36.191 "lvs/nvme0n1p0" 00:16:36.191 ], 00:16:36.191 "product_name": "Logical Volume", 00:16:36.191 "block_size": 4096, 00:16:36.191 "num_blocks": 26476544, 00:16:36.191 "uuid": "e85f28f6-7778-47d9-add1-39349489ea2b", 00:16:36.191 "assigned_rate_limits": { 00:16:36.191 "rw_ios_per_sec": 0, 00:16:36.191 "rw_mbytes_per_sec": 0, 00:16:36.192 "r_mbytes_per_sec": 0, 00:16:36.192 "w_mbytes_per_sec": 0 00:16:36.192 }, 00:16:36.192 "claimed": false, 00:16:36.192 "zoned": false, 00:16:36.192 "supported_io_types": { 00:16:36.192 "read": true, 00:16:36.192 "write": true, 00:16:36.192 "unmap": true, 00:16:36.192 "write_zeroes": true, 00:16:36.192 "flush": false, 00:16:36.192 "reset": true, 00:16:36.192 "compare": false, 00:16:36.192 "compare_and_write": false, 00:16:36.192 "abort": false, 00:16:36.192 "nvme_admin": false, 00:16:36.192 "nvme_io": false 00:16:36.192 }, 00:16:36.192 "driver_specific": { 00:16:36.192 "lvol": { 00:16:36.192 "lvol_store_uuid": "271af190-802c-4120-87b6-89f4f0e09460", 00:16:36.192 "base_bdev": "nvme0n1", 00:16:36.192 "thin_provision": true, 00:16:36.192 "snapshot": false, 00:16:36.192 "clone": false, 00:16:36.192 "esnap_clone": false 00:16:36.192 } 00:16:36.192 } 00:16:36.192 } 00:16:36.192 ]' 00:16:36.192 16:25:55 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:36.192 16:25:55 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:36.192 16:25:55 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:36.192 16:25:55 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:36.192 16:25:55 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:36.192 16:25:55 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:36.192 16:25:55 -- ftl/common.sh@41 -- # local base_size=5171 00:16:36.192 16:25:55 -- ftl/common.sh@44 -- # local nvc_bdev 00:16:36.192 16:25:55 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:16:36.451 16:25:56 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:36.451 16:25:56 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:36.451 16:25:56 -- ftl/common.sh@48 -- # get_bdev_size e85f28f6-7778-47d9-add1-39349489ea2b 00:16:36.451 16:25:56 -- common/autotest_common.sh@1367 -- # local bdev_name=e85f28f6-7778-47d9-add1-39349489ea2b 00:16:36.451 16:25:56 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:36.451 16:25:56 -- common/autotest_common.sh@1369 -- # local bs 00:16:36.451 16:25:56 -- common/autotest_common.sh@1370 -- # local nb 00:16:36.451 16:25:56 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e85f28f6-7778-47d9-add1-39349489ea2b 00:16:36.709 16:25:56 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:36.709 { 00:16:36.709 "name": "e85f28f6-7778-47d9-add1-39349489ea2b", 00:16:36.709 "aliases": [ 00:16:36.709 "lvs/nvme0n1p0" 00:16:36.709 ], 00:16:36.709 "product_name": "Logical Volume", 00:16:36.709 "block_size": 4096, 00:16:36.709 "num_blocks": 26476544, 
00:16:36.709 "uuid": "e85f28f6-7778-47d9-add1-39349489ea2b", 00:16:36.709 "assigned_rate_limits": { 00:16:36.709 "rw_ios_per_sec": 0, 00:16:36.709 "rw_mbytes_per_sec": 0, 00:16:36.709 "r_mbytes_per_sec": 0, 00:16:36.709 "w_mbytes_per_sec": 0 00:16:36.709 }, 00:16:36.709 "claimed": false, 00:16:36.709 "zoned": false, 00:16:36.709 "supported_io_types": { 00:16:36.709 "read": true, 00:16:36.709 "write": true, 00:16:36.709 "unmap": true, 00:16:36.709 "write_zeroes": true, 00:16:36.709 "flush": false, 00:16:36.709 "reset": true, 00:16:36.709 "compare": false, 00:16:36.709 "compare_and_write": false, 00:16:36.709 "abort": false, 00:16:36.709 "nvme_admin": false, 00:16:36.709 "nvme_io": false 00:16:36.709 }, 00:16:36.709 "driver_specific": { 00:16:36.709 "lvol": { 00:16:36.709 "lvol_store_uuid": "271af190-802c-4120-87b6-89f4f0e09460", 00:16:36.709 "base_bdev": "nvme0n1", 00:16:36.709 "thin_provision": true, 00:16:36.709 "snapshot": false, 00:16:36.709 "clone": false, 00:16:36.709 "esnap_clone": false 00:16:36.709 } 00:16:36.709 } 00:16:36.709 } 00:16:36.709 ]' 00:16:36.709 16:25:56 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:36.709 16:25:56 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:36.709 16:25:56 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:36.709 16:25:56 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:36.709 16:25:56 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:36.709 16:25:56 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:36.709 16:25:56 -- ftl/common.sh@48 -- # cache_size=5171 00:16:36.709 16:25:56 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:36.968 16:25:56 -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:16:36.968 16:25:56 -- ftl/trim.sh@46 -- # l2p_percentage=60 00:16:36.968 16:25:56 -- ftl/trim.sh@47 -- # get_bdev_size e85f28f6-7778-47d9-add1-39349489ea2b 00:16:36.968 16:25:56 -- common/autotest_common.sh@1367 -- # local bdev_name=e85f28f6-7778-47d9-add1-39349489ea2b 00:16:36.968 16:25:56 -- common/autotest_common.sh@1368 -- # local bdev_info 00:16:36.968 16:25:56 -- common/autotest_common.sh@1369 -- # local bs 00:16:36.968 16:25:56 -- common/autotest_common.sh@1370 -- # local nb 00:16:36.968 16:25:56 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e85f28f6-7778-47d9-add1-39349489ea2b 00:16:37.226 16:25:56 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:16:37.226 { 00:16:37.226 "name": "e85f28f6-7778-47d9-add1-39349489ea2b", 00:16:37.226 "aliases": [ 00:16:37.226 "lvs/nvme0n1p0" 00:16:37.226 ], 00:16:37.226 "product_name": "Logical Volume", 00:16:37.226 "block_size": 4096, 00:16:37.226 "num_blocks": 26476544, 00:16:37.226 "uuid": "e85f28f6-7778-47d9-add1-39349489ea2b", 00:16:37.226 "assigned_rate_limits": { 00:16:37.226 "rw_ios_per_sec": 0, 00:16:37.226 "rw_mbytes_per_sec": 0, 00:16:37.226 "r_mbytes_per_sec": 0, 00:16:37.226 "w_mbytes_per_sec": 0 00:16:37.226 }, 00:16:37.226 "claimed": false, 00:16:37.226 "zoned": false, 00:16:37.226 "supported_io_types": { 00:16:37.226 "read": true, 00:16:37.226 "write": true, 00:16:37.226 "unmap": true, 00:16:37.226 "write_zeroes": true, 00:16:37.226 "flush": false, 00:16:37.226 "reset": true, 00:16:37.226 "compare": false, 00:16:37.226 "compare_and_write": false, 00:16:37.226 "abort": false, 00:16:37.226 "nvme_admin": false, 00:16:37.226 "nvme_io": false 00:16:37.226 }, 00:16:37.226 "driver_specific": { 00:16:37.226 "lvol": { 00:16:37.226 
"lvol_store_uuid": "271af190-802c-4120-87b6-89f4f0e09460", 00:16:37.226 "base_bdev": "nvme0n1", 00:16:37.226 "thin_provision": true, 00:16:37.226 "snapshot": false, 00:16:37.226 "clone": false, 00:16:37.226 "esnap_clone": false 00:16:37.226 } 00:16:37.226 } 00:16:37.226 } 00:16:37.226 ]' 00:16:37.226 16:25:56 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:16:37.226 16:25:56 -- common/autotest_common.sh@1372 -- # bs=4096 00:16:37.226 16:25:56 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:16:37.226 16:25:56 -- common/autotest_common.sh@1373 -- # nb=26476544 00:16:37.226 16:25:56 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:16:37.226 16:25:56 -- common/autotest_common.sh@1377 -- # echo 103424 00:16:37.226 16:25:56 -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:16:37.226 16:25:56 -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e85f28f6-7778-47d9-add1-39349489ea2b -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:16:37.487 [2024-11-09 16:25:57.032114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.487 [2024-11-09 16:25:57.032251] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:37.487 [2024-11-09 16:25:57.032271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:37.487 [2024-11-09 16:25:57.032277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.487 [2024-11-09 16:25:57.034519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.487 [2024-11-09 16:25:57.034548] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:37.487 [2024-11-09 16:25:57.034557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.213 ms 00:16:37.487 [2024-11-09 16:25:57.034563] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.487 [2024-11-09 16:25:57.034644] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:37.487 [2024-11-09 16:25:57.035221] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:37.487 [2024-11-09 16:25:57.035252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.487 [2024-11-09 16:25:57.035258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:37.487 [2024-11-09 16:25:57.035266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:16:37.487 [2024-11-09 16:25:57.035272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.487 [2024-11-09 16:25:57.035364] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0532df08-a2cd-46ce-aed4-05877063f95a 00:16:37.487 [2024-11-09 16:25:57.036360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.487 [2024-11-09 16:25:57.036387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:37.487 [2024-11-09 16:25:57.036395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:16:37.487 [2024-11-09 16:25:57.036402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.487 [2024-11-09 16:25:57.041523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.487 [2024-11-09 16:25:57.041550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:37.487 
[2024-11-09 16:25:57.041557] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.039 ms 00:16:37.487 [2024-11-09 16:25:57.041564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.487 [2024-11-09 16:25:57.041666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.487 [2024-11-09 16:25:57.041675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:37.487 [2024-11-09 16:25:57.041682] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:16:37.487 [2024-11-09 16:25:57.041691] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.487 [2024-11-09 16:25:57.041717] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.487 [2024-11-09 16:25:57.041725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:37.487 [2024-11-09 16:25:57.041731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:37.487 [2024-11-09 16:25:57.041737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.487 [2024-11-09 16:25:57.041775] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:37.487 [2024-11-09 16:25:57.044752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.487 [2024-11-09 16:25:57.044776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:37.487 [2024-11-09 16:25:57.044787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.981 ms 00:16:37.487 [2024-11-09 16:25:57.044792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.487 [2024-11-09 16:25:57.044853] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.487 [2024-11-09 16:25:57.044860] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:37.487 [2024-11-09 16:25:57.044867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:37.487 [2024-11-09 16:25:57.044872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.487 [2024-11-09 16:25:57.044905] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:37.487 [2024-11-09 16:25:57.044988] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:37.487 [2024-11-09 16:25:57.045000] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:37.487 [2024-11-09 16:25:57.045008] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:37.487 [2024-11-09 16:25:57.045017] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:37.487 [2024-11-09 16:25:57.045024] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:37.487 [2024-11-09 16:25:57.045032] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:37.487 [2024-11-09 16:25:57.045038] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:37.488 [2024-11-09 16:25:57.045046] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:37.488 [2024-11-09 16:25:57.045051] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:37.488 [2024-11-09 
16:25:57.045059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.488 [2024-11-09 16:25:57.045064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:37.488 [2024-11-09 16:25:57.045072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.155 ms 00:16:37.488 [2024-11-09 16:25:57.045077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.488 [2024-11-09 16:25:57.045142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.488 [2024-11-09 16:25:57.045148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:37.488 [2024-11-09 16:25:57.045164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:37.488 [2024-11-09 16:25:57.045169] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.488 [2024-11-09 16:25:57.045279] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:37.488 [2024-11-09 16:25:57.045287] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:37.488 [2024-11-09 16:25:57.045294] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:37.488 [2024-11-09 16:25:57.045301] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.488 [2024-11-09 16:25:57.045308] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:37.488 [2024-11-09 16:25:57.045313] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:37.488 [2024-11-09 16:25:57.045319] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:37.488 [2024-11-09 16:25:57.045324] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:37.488 [2024-11-09 16:25:57.045331] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:37.488 [2024-11-09 16:25:57.045335] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:37.488 [2024-11-09 16:25:57.045343] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:37.488 [2024-11-09 16:25:57.045348] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:37.488 [2024-11-09 16:25:57.045355] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:37.488 [2024-11-09 16:25:57.045360] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:37.488 [2024-11-09 16:25:57.045369] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:37.488 [2024-11-09 16:25:57.045375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.488 [2024-11-09 16:25:57.045382] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:37.488 [2024-11-09 16:25:57.045387] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:37.488 [2024-11-09 16:25:57.045394] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.488 [2024-11-09 16:25:57.045399] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:37.488 [2024-11-09 16:25:57.045405] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:37.488 [2024-11-09 16:25:57.045410] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:37.488 [2024-11-09 16:25:57.045417] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:37.488 [2024-11-09 16:25:57.045422] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 
MiB 00:16:37.488 [2024-11-09 16:25:57.045429] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:37.488 [2024-11-09 16:25:57.045433] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:37.488 [2024-11-09 16:25:57.045440] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:37.488 [2024-11-09 16:25:57.045444] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:37.488 [2024-11-09 16:25:57.045450] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:37.488 [2024-11-09 16:25:57.045455] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:37.488 [2024-11-09 16:25:57.045462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:37.488 [2024-11-09 16:25:57.045467] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:37.488 [2024-11-09 16:25:57.045475] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:37.488 [2024-11-09 16:25:57.045480] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:37.488 [2024-11-09 16:25:57.045486] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:37.488 [2024-11-09 16:25:57.045491] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:37.488 [2024-11-09 16:25:57.045497] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:37.488 [2024-11-09 16:25:57.045502] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:37.488 [2024-11-09 16:25:57.045508] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:37.488 [2024-11-09 16:25:57.045513] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:37.488 [2024-11-09 16:25:57.045520] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:37.488 [2024-11-09 16:25:57.045525] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:37.488 [2024-11-09 16:25:57.045532] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:37.488 [2024-11-09 16:25:57.045538] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:37.488 [2024-11-09 16:25:57.045546] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:37.488 [2024-11-09 16:25:57.045551] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:37.488 [2024-11-09 16:25:57.045558] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:37.488 [2024-11-09 16:25:57.045564] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:37.488 [2024-11-09 16:25:57.045571] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:37.488 [2024-11-09 16:25:57.045577] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:37.488 [2024-11-09 16:25:57.045585] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:37.488 [2024-11-09 16:25:57.045592] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:37.488 [2024-11-09 16:25:57.045600] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:37.488 [2024-11-09 16:25:57.045606] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 
ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:37.488 [2024-11-09 16:25:57.045613] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:37.488 [2024-11-09 16:25:57.045618] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:37.488 [2024-11-09 16:25:57.045625] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:37.488 [2024-11-09 16:25:57.045630] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:37.488 [2024-11-09 16:25:57.045637] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:37.488 [2024-11-09 16:25:57.045642] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:37.488 [2024-11-09 16:25:57.045649] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:37.489 [2024-11-09 16:25:57.045654] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:37.489 [2024-11-09 16:25:57.045661] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:37.489 [2024-11-09 16:25:57.045666] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:37.489 [2024-11-09 16:25:57.045676] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:37.489 [2024-11-09 16:25:57.045681] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:37.489 [2024-11-09 16:25:57.045689] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:37.489 [2024-11-09 16:25:57.045695] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:37.489 [2024-11-09 16:25:57.045702] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:37.489 [2024-11-09 16:25:57.045707] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:37.489 [2024-11-09 16:25:57.045718] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:37.489 [2024-11-09 16:25:57.045724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.489 [2024-11-09 16:25:57.045731] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:37.489 [2024-11-09 16:25:57.045737] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:16:37.489 [2024-11-09 16:25:57.045744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.489 [2024-11-09 16:25:57.057995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:37.489 [2024-11-09 16:25:57.058109] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:37.489 [2024-11-09 16:25:57.058121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.162 ms 00:16:37.489 [2024-11-09 16:25:57.058128] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.489 [2024-11-09 16:25:57.058258] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.489 [2024-11-09 16:25:57.058270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:37.489 [2024-11-09 16:25:57.058278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:16:37.489 [2024-11-09 16:25:57.058285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.489 [2024-11-09 16:25:57.083557] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.489 [2024-11-09 16:25:57.083592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:37.489 [2024-11-09 16:25:57.083600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.241 ms 00:16:37.489 [2024-11-09 16:25:57.083608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.489 [2024-11-09 16:25:57.083665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.489 [2024-11-09 16:25:57.083674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:37.489 [2024-11-09 16:25:57.083681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:16:37.489 [2024-11-09 16:25:57.083692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.489 [2024-11-09 16:25:57.083992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.489 [2024-11-09 16:25:57.084005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:37.489 [2024-11-09 16:25:57.084012] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:16:37.489 [2024-11-09 16:25:57.084019] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.489 [2024-11-09 16:25:57.084121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.489 [2024-11-09 16:25:57.084130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:37.489 [2024-11-09 16:25:57.084136] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:37.489 [2024-11-09 16:25:57.084144] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.489 [2024-11-09 16:25:57.107188] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.489 [2024-11-09 16:25:57.107268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:37.489 [2024-11-09 16:25:57.107288] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.008 ms 00:16:37.489 [2024-11-09 16:25:57.107304] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.489 [2024-11-09 16:25:57.118873] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:37.489 [2024-11-09 16:25:57.131452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.489 [2024-11-09 16:25:57.131478] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:37.489 [2024-11-09 16:25:57.131488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.991 ms 00:16:37.489 
[2024-11-09 16:25:57.131494] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.489 [2024-11-09 16:25:57.207847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:37.489 [2024-11-09 16:25:57.207996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:37.489 [2024-11-09 16:25:57.208020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.293 ms 00:16:37.489 [2024-11-09 16:25:57.208029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:37.489 [2024-11-09 16:25:57.208108] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:16:37.489 [2024-11-09 16:25:57.208121] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:16:40.082 [2024-11-09 16:25:59.646332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.082 [2024-11-09 16:25:59.646389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:40.082 [2024-11-09 16:25:59.646407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2438.209 ms 00:16:40.082 [2024-11-09 16:25:59.646415] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.082 [2024-11-09 16:25:59.646651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.082 [2024-11-09 16:25:59.646666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:40.082 [2024-11-09 16:25:59.646677] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:16:40.082 [2024-11-09 16:25:59.646685] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.082 [2024-11-09 16:25:59.671065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.082 [2024-11-09 16:25:59.671099] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:40.082 [2024-11-09 16:25:59.671112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.334 ms 00:16:40.082 [2024-11-09 16:25:59.671120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.082 [2024-11-09 16:25:59.694063] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.082 [2024-11-09 16:25:59.694093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:40.082 [2024-11-09 16:25:59.694108] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.875 ms 00:16:40.082 [2024-11-09 16:25:59.694116] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.082 [2024-11-09 16:25:59.694463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.082 [2024-11-09 16:25:59.694481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:40.082 [2024-11-09 16:25:59.694492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:16:40.083 [2024-11-09 16:25:59.694503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.083 [2024-11-09 16:25:59.755477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.083 [2024-11-09 16:25:59.755509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:40.083 [2024-11-09 16:25:59.755522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.926 ms 00:16:40.083 [2024-11-09 16:25:59.755530] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.083 [2024-11-09 16:25:59.779591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.083 [2024-11-09 16:25:59.779630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:40.083 [2024-11-09 16:25:59.779643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.984 ms 00:16:40.083 [2024-11-09 16:25:59.779650] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.083 [2024-11-09 16:25:59.784019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.083 [2024-11-09 16:25:59.784048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:40.083 [2024-11-09 16:25:59.784060] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.302 ms 00:16:40.083 [2024-11-09 16:25:59.784068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.083 [2024-11-09 16:25:59.807558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.083 [2024-11-09 16:25:59.807583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:40.083 [2024-11-09 16:25:59.807595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.422 ms 00:16:40.083 [2024-11-09 16:25:59.807602] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.083 [2024-11-09 16:25:59.807675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.083 [2024-11-09 16:25:59.807685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:40.083 [2024-11-09 16:25:59.807695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:40.083 [2024-11-09 16:25:59.807702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.083 [2024-11-09 16:25:59.807784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:40.083 [2024-11-09 16:25:59.807807] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:40.083 [2024-11-09 16:25:59.807817] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:16:40.083 [2024-11-09 16:25:59.807824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:40.083 [2024-11-09 16:25:59.808578] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:40.083 [2024-11-09 16:25:59.811668] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2776.192 ms, result 0 00:16:40.083 [2024-11-09 16:25:59.812622] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:40.083 { 00:16:40.083 "name": "ftl0", 00:16:40.083 "uuid": "0532df08-a2cd-46ce-aed4-05877063f95a" 00:16:40.083 } 00:16:40.083 16:25:59 -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:16:40.083 16:25:59 -- common/autotest_common.sh@897 -- # local bdev_name=ftl0 00:16:40.083 16:25:59 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:40.083 16:25:59 -- common/autotest_common.sh@899 -- # local i 00:16:40.083 16:25:59 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:40.083 16:25:59 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:40.083 16:25:59 -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:40.349 16:26:00 -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:40.610 [ 00:16:40.610 { 00:16:40.610 "name": "ftl0", 00:16:40.610 "aliases": [ 00:16:40.610 "0532df08-a2cd-46ce-aed4-05877063f95a" 00:16:40.610 ], 00:16:40.610 "product_name": "FTL disk", 00:16:40.610 "block_size": 4096, 00:16:40.610 "num_blocks": 23592960, 00:16:40.610 "uuid": "0532df08-a2cd-46ce-aed4-05877063f95a", 00:16:40.610 "assigned_rate_limits": { 00:16:40.610 "rw_ios_per_sec": 0, 00:16:40.610 "rw_mbytes_per_sec": 0, 00:16:40.610 "r_mbytes_per_sec": 0, 00:16:40.610 "w_mbytes_per_sec": 0 00:16:40.610 }, 00:16:40.610 "claimed": false, 00:16:40.610 "zoned": false, 00:16:40.610 "supported_io_types": { 00:16:40.610 "read": true, 00:16:40.610 "write": true, 00:16:40.610 "unmap": true, 00:16:40.610 "write_zeroes": true, 00:16:40.610 "flush": true, 00:16:40.610 "reset": false, 00:16:40.610 "compare": false, 00:16:40.610 "compare_and_write": false, 00:16:40.610 "abort": false, 00:16:40.610 "nvme_admin": false, 00:16:40.610 "nvme_io": false 00:16:40.610 }, 00:16:40.610 "driver_specific": { 00:16:40.610 "ftl": { 00:16:40.610 "base_bdev": "e85f28f6-7778-47d9-add1-39349489ea2b", 00:16:40.610 "cache": "nvc0n1p0" 00:16:40.610 } 00:16:40.610 } 00:16:40.610 } 00:16:40.610 ] 00:16:40.610 16:26:00 -- common/autotest_common.sh@905 -- # return 0 00:16:40.610 16:26:00 -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:16:40.610 16:26:00 -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:40.869 16:26:00 -- ftl/trim.sh@56 -- # echo ']}' 00:16:40.869 16:26:00 -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:16:40.869 16:26:00 -- ftl/trim.sh@59 -- # bdev_info='[ 00:16:40.869 { 00:16:40.869 "name": "ftl0", 00:16:40.869 "aliases": [ 00:16:40.869 "0532df08-a2cd-46ce-aed4-05877063f95a" 00:16:40.869 ], 00:16:40.869 "product_name": "FTL disk", 00:16:40.869 "block_size": 4096, 00:16:40.869 "num_blocks": 23592960, 00:16:40.869 "uuid": "0532df08-a2cd-46ce-aed4-05877063f95a", 00:16:40.869 "assigned_rate_limits": { 00:16:40.869 "rw_ios_per_sec": 0, 00:16:40.869 "rw_mbytes_per_sec": 0, 00:16:40.869 "r_mbytes_per_sec": 0, 00:16:40.869 "w_mbytes_per_sec": 0 00:16:40.869 }, 00:16:40.869 "claimed": false, 00:16:40.869 "zoned": false, 00:16:40.869 "supported_io_types": { 00:16:40.869 "read": true, 00:16:40.869 "write": true, 00:16:40.869 "unmap": true, 00:16:40.869 "write_zeroes": true, 00:16:40.869 "flush": true, 00:16:40.869 "reset": false, 00:16:40.869 "compare": false, 00:16:40.869 "compare_and_write": false, 00:16:40.869 "abort": false, 00:16:40.869 "nvme_admin": false, 00:16:40.869 "nvme_io": false 00:16:40.869 }, 00:16:40.869 "driver_specific": { 00:16:40.869 "ftl": { 00:16:40.869 "base_bdev": "e85f28f6-7778-47d9-add1-39349489ea2b", 00:16:40.869 "cache": "nvc0n1p0" 00:16:40.869 } 00:16:40.869 } 00:16:40.869 } 00:16:40.869 ]' 00:16:40.869 16:26:00 -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:16:40.869 16:26:00 -- ftl/trim.sh@60 -- # nb=23592960 00:16:40.869 16:26:00 -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:41.128 [2024-11-09 16:26:00.784491] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.128 [2024-11-09 16:26:00.784522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:41.128 [2024-11-09 16:26:00.784532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:41.128 [2024-11-09 16:26:00.784540] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.128 [2024-11-09 16:26:00.784574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:16:41.128 [2024-11-09 16:26:00.786605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.128 [2024-11-09 16:26:00.786627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:41.128 [2024-11-09 16:26:00.786639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.016 ms 00:16:41.128 [2024-11-09 16:26:00.786645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.128 [2024-11-09 16:26:00.787244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.128 [2024-11-09 16:26:00.787256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:41.128 [2024-11-09 16:26:00.787265] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:16:41.128 [2024-11-09 16:26:00.787272] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.128 [2024-11-09 16:26:00.790030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.128 [2024-11-09 16:26:00.790045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:41.128 [2024-11-09 16:26:00.790057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.723 ms 00:16:41.128 [2024-11-09 16:26:00.790063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.128 [2024-11-09 16:26:00.795284] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.128 [2024-11-09 16:26:00.795306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:16:41.128 [2024-11-09 16:26:00.795315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.176 ms 00:16:41.128 [2024-11-09 16:26:00.795322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.128 [2024-11-09 16:26:00.813498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.128 [2024-11-09 16:26:00.813521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:41.128 [2024-11-09 16:26:00.813530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.083 ms 00:16:41.128 [2024-11-09 16:26:00.813536] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.128 [2024-11-09 16:26:00.825990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.128 [2024-11-09 16:26:00.826014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:41.128 [2024-11-09 16:26:00.826024] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.393 ms 00:16:41.128 [2024-11-09 16:26:00.826031] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.128 [2024-11-09 16:26:00.826221] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.128 [2024-11-09 16:26:00.826241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:41.128 [2024-11-09 16:26:00.826253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:16:41.128 [2024-11-09 16:26:00.826259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.128 [2024-11-09 16:26:00.844376] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.128 [2024-11-09 16:26:00.844397] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist band info metadata 00:16:41.128 [2024-11-09 16:26:00.844406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.083 ms 00:16:41.128 [2024-11-09 16:26:00.844411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.128 [2024-11-09 16:26:00.862096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.128 [2024-11-09 16:26:00.862116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:16:41.128 [2024-11-09 16:26:00.862124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.624 ms 00:16:41.128 [2024-11-09 16:26:00.862130] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.128 [2024-11-09 16:26:00.879285] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.128 [2024-11-09 16:26:00.879306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:41.128 [2024-11-09 16:26:00.879315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.099 ms 00:16:41.128 [2024-11-09 16:26:00.879321] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.389 [2024-11-09 16:26:00.896336] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.389 [2024-11-09 16:26:00.896357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:41.389 [2024-11-09 16:26:00.896367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.917 ms 00:16:41.389 [2024-11-09 16:26:00.896372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.389 [2024-11-09 16:26:00.896421] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:41.389 [2024-11-09 16:26:00.896432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896684] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 
16:26:00.896848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:41.389 [2024-11-09 16:26:00.896886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.896893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.896899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.896906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.896911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.896918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.896924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.896930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.896936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.896944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.896949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.896956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.896962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.896969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.896974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.896981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.896987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.896993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.896999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 
00:16:41.390 [2024-11-09 16:26:00.897006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.897012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.897019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.897025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.897032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.897037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.897045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.897051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.897059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.897064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.897071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.897077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.897084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:41.390 [2024-11-09 16:26:00.897095] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:41.390 [2024-11-09 16:26:00.897103] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0532df08-a2cd-46ce-aed4-05877063f95a 00:16:41.390 [2024-11-09 16:26:00.897109] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:41.390 [2024-11-09 16:26:00.897116] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:41.390 [2024-11-09 16:26:00.897121] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:41.390 [2024-11-09 16:26:00.897128] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:41.390 [2024-11-09 16:26:00.897133] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:41.390 [2024-11-09 16:26:00.897139] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:41.390 [2024-11-09 16:26:00.897145] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:41.390 [2024-11-09 16:26:00.897158] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:41.390 [2024-11-09 16:26:00.897163] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:41.390 [2024-11-09 16:26:00.897170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.390 [2024-11-09 16:26:00.897177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:41.390 [2024-11-09 16:26:00.897185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:16:41.390 [2024-11-09 16:26:00.897190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:16:41.390 [2024-11-09 16:26:00.906780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.390 [2024-11-09 16:26:00.906801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:41.390 [2024-11-09 16:26:00.906809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.555 ms 00:16:41.390 [2024-11-09 16:26:00.906815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.390 [2024-11-09 16:26:00.907001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:41.390 [2024-11-09 16:26:00.907008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:41.390 [2024-11-09 16:26:00.907016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:16:41.390 [2024-11-09 16:26:00.907021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.390 [2024-11-09 16:26:00.941546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.390 [2024-11-09 16:26:00.941569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:41.390 [2024-11-09 16:26:00.941579] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.390 [2024-11-09 16:26:00.941585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.390 [2024-11-09 16:26:00.941666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.390 [2024-11-09 16:26:00.941672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:41.390 [2024-11-09 16:26:00.941680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.390 [2024-11-09 16:26:00.941686] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.390 [2024-11-09 16:26:00.941741] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.390 [2024-11-09 16:26:00.941748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:41.390 [2024-11-09 16:26:00.941755] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.390 [2024-11-09 16:26:00.941760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.390 [2024-11-09 16:26:00.941792] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.390 [2024-11-09 16:26:00.941800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:41.390 [2024-11-09 16:26:00.941806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.390 [2024-11-09 16:26:00.941811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.390 [2024-11-09 16:26:01.006927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.390 [2024-11-09 16:26:01.006956] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:41.390 [2024-11-09 16:26:01.006969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.390 [2024-11-09 16:26:01.006975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.390 [2024-11-09 16:26:01.029486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.390 [2024-11-09 16:26:01.029512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:41.390 [2024-11-09 16:26:01.029521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.390 
[2024-11-09 16:26:01.029528] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.390 [2024-11-09 16:26:01.029589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.390 [2024-11-09 16:26:01.029596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:41.390 [2024-11-09 16:26:01.029604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.390 [2024-11-09 16:26:01.029609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.390 [2024-11-09 16:26:01.029657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.390 [2024-11-09 16:26:01.029662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:41.390 [2024-11-09 16:26:01.029672] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.390 [2024-11-09 16:26:01.029688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.390 [2024-11-09 16:26:01.029778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.390 [2024-11-09 16:26:01.029789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:41.390 [2024-11-09 16:26:01.029799] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.390 [2024-11-09 16:26:01.029804] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.390 [2024-11-09 16:26:01.029861] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.390 [2024-11-09 16:26:01.029868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:41.390 [2024-11-09 16:26:01.029877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.390 [2024-11-09 16:26:01.029882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.390 [2024-11-09 16:26:01.029931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.390 [2024-11-09 16:26:01.029937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:41.390 [2024-11-09 16:26:01.029945] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.390 [2024-11-09 16:26:01.029950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.390 [2024-11-09 16:26:01.030005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:41.390 [2024-11-09 16:26:01.030012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:41.390 [2024-11-09 16:26:01.030021] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:41.390 [2024-11-09 16:26:01.030026] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:41.390 [2024-11-09 16:26:01.030189] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 245.676 ms, result 0 00:16:41.390 true 00:16:41.391 16:26:01 -- ftl/trim.sh@63 -- # killprocess 72070 00:16:41.391 16:26:01 -- common/autotest_common.sh@936 -- # '[' -z 72070 ']' 00:16:41.391 16:26:01 -- common/autotest_common.sh@940 -- # kill -0 72070 00:16:41.391 16:26:01 -- common/autotest_common.sh@941 -- # uname 00:16:41.391 16:26:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:41.391 16:26:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72070 00:16:41.391 16:26:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:41.391 
16:26:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:41.391 killing process with pid 72070 00:16:41.391 16:26:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72070' 00:16:41.391 16:26:01 -- common/autotest_common.sh@955 -- # kill 72070 00:16:41.391 16:26:01 -- common/autotest_common.sh@960 -- # wait 72070 00:16:47.974 16:26:06 -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:16:48.234 65536+0 records in 00:16:48.234 65536+0 records out 00:16:48.234 268435456 bytes (268 MB, 256 MiB) copied, 1.10345 s, 243 MB/s 00:16:48.234 16:26:07 -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:48.234 [2024-11-09 16:26:07.997328] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:16:48.234 [2024-11-09 16:26:07.999102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72284 ] 00:16:48.492 [2024-11-09 16:26:08.147928] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.751 [2024-11-09 16:26:08.285976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.751 [2024-11-09 16:26:08.489380] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:48.751 [2024-11-09 16:26:08.489432] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:16:49.012 [2024-11-09 16:26:08.645515] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.012 [2024-11-09 16:26:08.645560] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:49.012 [2024-11-09 16:26:08.645573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:49.012 [2024-11-09 16:26:08.645581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.012 [2024-11-09 16:26:08.648312] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.012 [2024-11-09 16:26:08.648346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:49.012 [2024-11-09 16:26:08.648357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.714 ms 00:16:49.012 [2024-11-09 16:26:08.648365] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.012 [2024-11-09 16:26:08.648431] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:49.013 [2024-11-09 16:26:08.649145] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:49.013 [2024-11-09 16:26:08.649179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.013 [2024-11-09 16:26:08.649187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:49.013 [2024-11-09 16:26:08.649196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:16:49.013 [2024-11-09 16:26:08.649203] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.013 [2024-11-09 16:26:08.650643] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:16:49.013 [2024-11-09 16:26:08.664409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:16:49.013 [2024-11-09 16:26:08.664445] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:16:49.013 [2024-11-09 16:26:08.664457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.768 ms 00:16:49.013 [2024-11-09 16:26:08.664464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.013 [2024-11-09 16:26:08.664550] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.013 [2024-11-09 16:26:08.664561] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:16:49.013 [2024-11-09 16:26:08.664570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:49.013 [2024-11-09 16:26:08.664577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.013 [2024-11-09 16:26:08.671423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.013 [2024-11-09 16:26:08.671451] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:49.013 [2024-11-09 16:26:08.671461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.803 ms 00:16:49.013 [2024-11-09 16:26:08.671473] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.013 [2024-11-09 16:26:08.671573] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.013 [2024-11-09 16:26:08.671584] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:49.013 [2024-11-09 16:26:08.671592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:49.013 [2024-11-09 16:26:08.671600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.013 [2024-11-09 16:26:08.671630] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.013 [2024-11-09 16:26:08.671640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:49.013 [2024-11-09 16:26:08.671648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:49.013 [2024-11-09 16:26:08.671655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.013 [2024-11-09 16:26:08.671685] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:16:49.013 [2024-11-09 16:26:08.675517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.013 [2024-11-09 16:26:08.675545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:49.013 [2024-11-09 16:26:08.675554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.846 ms 00:16:49.013 [2024-11-09 16:26:08.675565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.013 [2024-11-09 16:26:08.675626] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.013 [2024-11-09 16:26:08.675635] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:49.013 [2024-11-09 16:26:08.675643] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:16:49.013 [2024-11-09 16:26:08.675651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.013 [2024-11-09 16:26:08.675669] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:16:49.013 [2024-11-09 16:26:08.675689] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:16:49.013 [2024-11-09 16:26:08.675722] upgrade/ftl_sb_v5.c: 
287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:16:49.013 [2024-11-09 16:26:08.675741] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:16:49.013 [2024-11-09 16:26:08.675815] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:16:49.013 [2024-11-09 16:26:08.675825] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:49.013 [2024-11-09 16:26:08.675835] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:16:49.013 [2024-11-09 16:26:08.675846] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:49.013 [2024-11-09 16:26:08.675854] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:49.013 [2024-11-09 16:26:08.675862] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:16:49.013 [2024-11-09 16:26:08.675869] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:49.013 [2024-11-09 16:26:08.675876] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:16:49.013 [2024-11-09 16:26:08.675886] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:16:49.013 [2024-11-09 16:26:08.675894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.013 [2024-11-09 16:26:08.675901] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:49.013 [2024-11-09 16:26:08.675908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.227 ms 00:16:49.013 [2024-11-09 16:26:08.675916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.013 [2024-11-09 16:26:08.675981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.013 [2024-11-09 16:26:08.675991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:49.013 [2024-11-09 16:26:08.675998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:16:49.013 [2024-11-09 16:26:08.676005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.013 [2024-11-09 16:26:08.676083] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:49.013 [2024-11-09 16:26:08.676094] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:49.013 [2024-11-09 16:26:08.676103] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:49.013 [2024-11-09 16:26:08.676111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.013 [2024-11-09 16:26:08.676119] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:49.013 [2024-11-09 16:26:08.676127] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:49.013 [2024-11-09 16:26:08.676134] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:16:49.013 [2024-11-09 16:26:08.676141] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:49.013 [2024-11-09 16:26:08.676149] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:16:49.013 [2024-11-09 16:26:08.676156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:49.013 [2024-11-09 16:26:08.676163] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:49.013 [2024-11-09 16:26:08.676172] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:16:49.013 [2024-11-09 16:26:08.676179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:49.013 [2024-11-09 16:26:08.676186] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:49.013 [2024-11-09 16:26:08.676199] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:16:49.013 [2024-11-09 16:26:08.676206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.013 [2024-11-09 16:26:08.676213] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:49.013 [2024-11-09 16:26:08.676220] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:16:49.013 [2024-11-09 16:26:08.676239] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.013 [2024-11-09 16:26:08.676247] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:16:49.013 [2024-11-09 16:26:08.676254] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:16:49.013 [2024-11-09 16:26:08.676261] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:16:49.013 [2024-11-09 16:26:08.676268] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:49.013 [2024-11-09 16:26:08.676274] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:16:49.013 [2024-11-09 16:26:08.676281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:49.013 [2024-11-09 16:26:08.676289] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:49.013 [2024-11-09 16:26:08.676296] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:16:49.013 [2024-11-09 16:26:08.676303] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:49.013 [2024-11-09 16:26:08.676310] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:49.013 [2024-11-09 16:26:08.676317] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:16:49.013 [2024-11-09 16:26:08.676324] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:49.013 [2024-11-09 16:26:08.676330] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:49.013 [2024-11-09 16:26:08.676337] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:16:49.013 [2024-11-09 16:26:08.676344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:16:49.013 [2024-11-09 16:26:08.676351] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:49.013 [2024-11-09 16:26:08.676357] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:16:49.013 [2024-11-09 16:26:08.676363] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:49.013 [2024-11-09 16:26:08.676370] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:49.013 [2024-11-09 16:26:08.676377] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:16:49.013 [2024-11-09 16:26:08.676383] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:49.013 [2024-11-09 16:26:08.676390] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:49.013 [2024-11-09 16:26:08.676397] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:49.013 
[2024-11-09 16:26:08.676405] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:49.013 [2024-11-09 16:26:08.676418] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:49.013 [2024-11-09 16:26:08.676426] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:49.013 [2024-11-09 16:26:08.676433] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:49.013 [2024-11-09 16:26:08.676439] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:49.013 [2024-11-09 16:26:08.676446] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:49.013 [2024-11-09 16:26:08.676454] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:49.014 [2024-11-09 16:26:08.676461] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:49.014 [2024-11-09 16:26:08.676469] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:49.014 [2024-11-09 16:26:08.676478] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:49.014 [2024-11-09 16:26:08.676487] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:16:49.014 [2024-11-09 16:26:08.676494] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:16:49.014 [2024-11-09 16:26:08.676502] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:16:49.014 [2024-11-09 16:26:08.676510] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:16:49.014 [2024-11-09 16:26:08.676517] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:16:49.014 [2024-11-09 16:26:08.676524] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:16:49.014 [2024-11-09 16:26:08.676532] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:16:49.014 [2024-11-09 16:26:08.676540] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:16:49.014 [2024-11-09 16:26:08.676548] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:16:49.014 [2024-11-09 16:26:08.676556] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:16:49.014 [2024-11-09 16:26:08.676563] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:16:49.014 [2024-11-09 16:26:08.676570] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:16:49.014 [2024-11-09 16:26:08.676578] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:16:49.014 [2024-11-09 16:26:08.676585] upgrade/ftl_sb_v5.c: 
421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:49.014 [2024-11-09 16:26:08.676598] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:49.014 [2024-11-09 16:26:08.676606] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:49.014 [2024-11-09 16:26:08.676614] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:49.014 [2024-11-09 16:26:08.676621] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:49.014 [2024-11-09 16:26:08.676628] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:49.014 [2024-11-09 16:26:08.676637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.014 [2024-11-09 16:26:08.676645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:49.014 [2024-11-09 16:26:08.676653] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:16:49.014 [2024-11-09 16:26:08.676660] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.014 [2024-11-09 16:26:08.693689] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.014 [2024-11-09 16:26:08.693721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:49.014 [2024-11-09 16:26:08.693732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.984 ms 00:16:49.014 [2024-11-09 16:26:08.693741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.014 [2024-11-09 16:26:08.693857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.014 [2024-11-09 16:26:08.693867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:49.014 [2024-11-09 16:26:08.693876] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:16:49.014 [2024-11-09 16:26:08.693885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.014 [2024-11-09 16:26:08.739321] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.014 [2024-11-09 16:26:08.739360] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:49.014 [2024-11-09 16:26:08.739372] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.414 ms 00:16:49.014 [2024-11-09 16:26:08.739381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.014 [2024-11-09 16:26:08.739450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.014 [2024-11-09 16:26:08.739460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:49.014 [2024-11-09 16:26:08.739473] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:49.014 [2024-11-09 16:26:08.739480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.014 [2024-11-09 16:26:08.739921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.014 [2024-11-09 16:26:08.739938] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:49.014 [2024-11-09 16:26:08.739947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.421 ms 00:16:49.014 [2024-11-09 16:26:08.739956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.014 [2024-11-09 16:26:08.740083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.014 [2024-11-09 16:26:08.740093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:49.014 [2024-11-09 16:26:08.740101] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:16:49.014 [2024-11-09 16:26:08.740109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.014 [2024-11-09 16:26:08.764039] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.014 [2024-11-09 16:26:08.764086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:49.014 [2024-11-09 16:26:08.764100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.903 ms 00:16:49.014 [2024-11-09 16:26:08.764112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.014 [2024-11-09 16:26:08.776653] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:16:49.014 [2024-11-09 16:26:08.776695] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:16:49.014 [2024-11-09 16:26:08.776709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.014 [2024-11-09 16:26:08.776719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:16:49.014 [2024-11-09 16:26:08.776729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.439 ms 00:16:49.014 [2024-11-09 16:26:08.776736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.276 [2024-11-09 16:26:08.801248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.276 [2024-11-09 16:26:08.801289] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:16:49.276 [2024-11-09 16:26:08.801307] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.434 ms 00:16:49.276 [2024-11-09 16:26:08.801315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.276 [2024-11-09 16:26:08.813185] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.276 [2024-11-09 16:26:08.813219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:16:49.276 [2024-11-09 16:26:08.813242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.794 ms 00:16:49.276 [2024-11-09 16:26:08.813258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.276 [2024-11-09 16:26:08.824950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.276 [2024-11-09 16:26:08.824985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:16:49.276 [2024-11-09 16:26:08.824995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.629 ms 00:16:49.276 [2024-11-09 16:26:08.825002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.276 [2024-11-09 16:26:08.825416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.276 [2024-11-09 16:26:08.825435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:49.276 [2024-11-09 16:26:08.825444] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:16:49.276 [2024-11-09 
16:26:08.825451] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.276 [2024-11-09 16:26:08.885795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.276 [2024-11-09 16:26:08.885840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:16:49.277 [2024-11-09 16:26:08.885853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.320 ms 00:16:49.277 [2024-11-09 16:26:08.885860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.277 [2024-11-09 16:26:08.896814] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:49.277 [2024-11-09 16:26:08.913618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.277 [2024-11-09 16:26:08.913666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:49.277 [2024-11-09 16:26:08.913679] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.669 ms 00:16:49.277 [2024-11-09 16:26:08.913687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.277 [2024-11-09 16:26:08.913764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.277 [2024-11-09 16:26:08.913774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:16:49.277 [2024-11-09 16:26:08.913783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:49.277 [2024-11-09 16:26:08.913794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.277 [2024-11-09 16:26:08.913845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.277 [2024-11-09 16:26:08.913858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:49.277 [2024-11-09 16:26:08.913866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:49.277 [2024-11-09 16:26:08.913874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.277 [2024-11-09 16:26:08.915175] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.277 [2024-11-09 16:26:08.915216] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:16:49.277 [2024-11-09 16:26:08.915248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.276 ms 00:16:49.277 [2024-11-09 16:26:08.915256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.277 [2024-11-09 16:26:08.915291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.277 [2024-11-09 16:26:08.915300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:49.277 [2024-11-09 16:26:08.915311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:49.277 [2024-11-09 16:26:08.915319] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.277 [2024-11-09 16:26:08.915354] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:16:49.277 [2024-11-09 16:26:08.915363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.277 [2024-11-09 16:26:08.915371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:16:49.277 [2024-11-09 16:26:08.915379] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:49.277 [2024-11-09 16:26:08.915387] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.277 [2024-11-09 16:26:08.940528] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.277 [2024-11-09 16:26:08.940580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:49.277 [2024-11-09 16:26:08.940592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.116 ms 00:16:49.277 [2024-11-09 16:26:08.940600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.277 [2024-11-09 16:26:08.940705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:49.277 [2024-11-09 16:26:08.940715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:49.277 [2024-11-09 16:26:08.940726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:49.277 [2024-11-09 16:26:08.940733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:49.277 [2024-11-09 16:26:08.941838] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:49.277 [2024-11-09 16:26:08.945421] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 295.983 ms, result 0 00:16:49.277 [2024-11-09 16:26:08.946465] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:49.277 [2024-11-09 16:26:08.960609] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:50.220  [2024-11-09T16:26:11.376Z] Copying: 14/256 [MB] (14 MBps) [2024-11-09T16:26:12.321Z] Copying: 31/256 [MB] (16 MBps) [2024-11-09T16:26:13.266Z] Copying: 46/256 [MB] (15 MBps) [2024-11-09T16:26:14.209Z] Copying: 64/256 [MB] (18 MBps) [2024-11-09T16:26:15.153Z] Copying: 77/256 [MB] (12 MBps) [2024-11-09T16:26:16.097Z] Copying: 93/256 [MB] (16 MBps) [2024-11-09T16:26:17.043Z] Copying: 103/256 [MB] (10 MBps) [2024-11-09T16:26:17.988Z] Copying: 115/256 [MB] (11 MBps) [2024-11-09T16:26:18.966Z] Copying: 130/256 [MB] (15 MBps) [2024-11-09T16:26:20.353Z] Copying: 142/256 [MB] (12 MBps) [2024-11-09T16:26:21.297Z] Copying: 160/256 [MB] (17 MBps) [2024-11-09T16:26:22.238Z] Copying: 176/256 [MB] (15 MBps) [2024-11-09T16:26:23.180Z] Copying: 190/256 [MB] (13 MBps) [2024-11-09T16:26:24.121Z] Copying: 204/256 [MB] (13 MBps) [2024-11-09T16:26:25.065Z] Copying: 215/256 [MB] (11 MBps) [2024-11-09T16:26:26.004Z] Copying: 225/256 [MB] (10 MBps) [2024-11-09T16:26:26.264Z] Copying: 253/256 [MB] (27 MBps) [2024-11-09T16:26:26.264Z] Copying: 256/256 [MB] (average 14 MBps)[2024-11-09 16:26:26.036771] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:06.494 [2024-11-09 16:26:26.044122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.494 [2024-11-09 16:26:26.044155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:06.494 [2024-11-09 16:26:26.044172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:06.494 [2024-11-09 16:26:26.044179] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.494 [2024-11-09 16:26:26.044196] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:06.494 [2024-11-09 16:26:26.046337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.494 [2024-11-09 16:26:26.046362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:06.494 
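Each FTL management step in this log is traced by mngt/ftl_mngt.c as an Action entry followed by name:, duration:, and status: entries. A minimal sketch for condensing those traces into a per-step timing table, assuming the raw console output with one NOTICE entry per line (build.log is a placeholder file name, not taken from this job):

  awk -F 'name: |duration: ' \
      '/407:trace_step/ {step = $2}
       /409:trace_step/ {printf "%-35s %s\n", step, $2}' build.log

On the shutdown sequence that follows here, this would emit lines such as "Stop core poller" paired with "1.582 ms".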
[2024-11-09 16:26:26.046371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.129 ms 00:17:06.494 [2024-11-09 16:26:26.046378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.494 [2024-11-09 16:26:26.047979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.494 [2024-11-09 16:26:26.048011] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:06.494 [2024-11-09 16:26:26.048018] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.582 ms 00:17:06.494 [2024-11-09 16:26:26.048023] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.494 [2024-11-09 16:26:26.053569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.494 [2024-11-09 16:26:26.053596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:06.494 [2024-11-09 16:26:26.053604] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.527 ms 00:17:06.494 [2024-11-09 16:26:26.053610] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.494 [2024-11-09 16:26:26.058963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.494 [2024-11-09 16:26:26.058989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:06.494 [2024-11-09 16:26:26.058997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.318 ms 00:17:06.494 [2024-11-09 16:26:26.059003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.494 [2024-11-09 16:26:26.076823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.494 [2024-11-09 16:26:26.076853] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:06.494 [2024-11-09 16:26:26.076862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.769 ms 00:17:06.494 [2024-11-09 16:26:26.076868] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.494 [2024-11-09 16:26:26.088692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.494 [2024-11-09 16:26:26.088722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:06.494 [2024-11-09 16:26:26.088731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.787 ms 00:17:06.494 [2024-11-09 16:26:26.088738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.494 [2024-11-09 16:26:26.088842] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.494 [2024-11-09 16:26:26.088849] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:06.494 [2024-11-09 16:26:26.088856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:17:06.494 [2024-11-09 16:26:26.088862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.494 [2024-11-09 16:26:26.107012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.494 [2024-11-09 16:26:26.107040] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:06.494 [2024-11-09 16:26:26.107048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.137 ms 00:17:06.494 [2024-11-09 16:26:26.107052] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.494 [2024-11-09 16:26:26.124808] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.494 [2024-11-09 16:26:26.124837] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:06.494 [2024-11-09 16:26:26.124845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.710 ms 00:17:06.494 [2024-11-09 16:26:26.124849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.494 [2024-11-09 16:26:26.142240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.494 [2024-11-09 16:26:26.142265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:06.494 [2024-11-09 16:26:26.142273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.354 ms 00:17:06.494 [2024-11-09 16:26:26.142278] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.494 [2024-11-09 16:26:26.159858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.494 [2024-11-09 16:26:26.159886] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:06.494 [2024-11-09 16:26:26.159893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.527 ms 00:17:06.494 [2024-11-09 16:26:26.159899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.494 [2024-11-09 16:26:26.159933] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:06.494 [2024-11-09 16:26:26.159944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.159951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.159957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.159963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.159968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.159974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.159979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.159985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.159990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.159996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 
wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:06.494 [2024-11-09 16:26:26.160157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160318] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160459] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:06.495 [2024-11-09 16:26:26.160526] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:06.495 [2024-11-09 16:26:26.160532] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0532df08-a2cd-46ce-aed4-05877063f95a 00:17:06.495 [2024-11-09 16:26:26.160538] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:06.495 [2024-11-09 16:26:26.160543] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:06.495 [2024-11-09 16:26:26.160548] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:06.495 [2024-11-09 16:26:26.160554] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:06.495 [2024-11-09 16:26:26.160559] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:06.495 [2024-11-09 16:26:26.160565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:06.495 [2024-11-09 16:26:26.160573] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:06.495 [2024-11-09 16:26:26.160578] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:06.495 [2024-11-09 16:26:26.160582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:06.495 [2024-11-09 16:26:26.160588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.495 [2024-11-09 16:26:26.160593] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:06.495 [2024-11-09 16:26:26.160600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.655 ms 00:17:06.495 [2024-11-09 16:26:26.160605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.495 [2024-11-09 16:26:26.170064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.495 [2024-11-09 16:26:26.170088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:06.495 [2024-11-09 16:26:26.170096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.445 ms 00:17:06.495 [2024-11-09 16:26:26.170106] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.495 
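ftl_dev_dump_bands prints one line per band; in this shutdown dump all 100 bands report 0 / 261120 valid blocks, wr_cnt 0, and state free, which is consistent with the statistics block that follows it (total valid LBAs: 0, user writes: 0, hence WAF: inf). A quick way to condense such a dump into per-state counts, again assuming one entry per line in the raw console output and the same placeholder file name:

  grep -o 'state: [a-z]*' build.log | sort | uniq -c

For this dump it would report 100 occurrences of "state: free".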
[2024-11-09 16:26:26.170278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:06.495 [2024-11-09 16:26:26.170291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:06.495 [2024-11-09 16:26:26.170297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:17:06.495 [2024-11-09 16:26:26.170303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.495 [2024-11-09 16:26:26.199995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.495 [2024-11-09 16:26:26.200024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:06.495 [2024-11-09 16:26:26.200032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.495 [2024-11-09 16:26:26.200041] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.495 [2024-11-09 16:26:26.200100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.495 [2024-11-09 16:26:26.200107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:06.495 [2024-11-09 16:26:26.200113] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.495 [2024-11-09 16:26:26.200118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.495 [2024-11-09 16:26:26.200148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.495 [2024-11-09 16:26:26.200155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:06.495 [2024-11-09 16:26:26.200161] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.495 [2024-11-09 16:26:26.200167] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.495 [2024-11-09 16:26:26.200182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.495 [2024-11-09 16:26:26.200188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:06.496 [2024-11-09 16:26:26.200194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.496 [2024-11-09 16:26:26.200199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.496 [2024-11-09 16:26:26.259097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.496 [2024-11-09 16:26:26.259133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:06.496 [2024-11-09 16:26:26.259143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.496 [2024-11-09 16:26:26.259152] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.756 [2024-11-09 16:26:26.281996] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.756 [2024-11-09 16:26:26.282030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:06.756 [2024-11-09 16:26:26.282039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.756 [2024-11-09 16:26:26.282046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.756 [2024-11-09 16:26:26.282091] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.756 [2024-11-09 16:26:26.282098] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:06.756 [2024-11-09 16:26:26.282104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.756 [2024-11-09 
16:26:26.282110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.756 [2024-11-09 16:26:26.282133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.756 [2024-11-09 16:26:26.282142] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:06.756 [2024-11-09 16:26:26.282148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.756 [2024-11-09 16:26:26.282154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.756 [2024-11-09 16:26:26.282233] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.756 [2024-11-09 16:26:26.282242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:06.756 [2024-11-09 16:26:26.282248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.756 [2024-11-09 16:26:26.282253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.756 [2024-11-09 16:26:26.282278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.756 [2024-11-09 16:26:26.282287] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:06.756 [2024-11-09 16:26:26.282293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.756 [2024-11-09 16:26:26.282299] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.756 [2024-11-09 16:26:26.282326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.757 [2024-11-09 16:26:26.282333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:06.757 [2024-11-09 16:26:26.282339] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.757 [2024-11-09 16:26:26.282344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.757 [2024-11-09 16:26:26.282379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:06.757 [2024-11-09 16:26:26.282388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:06.757 [2024-11-09 16:26:26.282396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:06.757 [2024-11-09 16:26:26.282401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:06.757 [2024-11-09 16:26:26.282507] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 238.386 ms, result 0 00:17:07.692 00:17:07.692 00:17:07.692 16:26:27 -- ftl/trim.sh@72 -- # svcpid=72489 00:17:07.692 16:26:27 -- ftl/trim.sh@73 -- # waitforlisten 72489 00:17:07.692 16:26:27 -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:07.692 16:26:27 -- common/autotest_common.sh@829 -- # '[' -z 72489 ']' 00:17:07.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:07.692 16:26:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:07.692 16:26:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:07.692 16:26:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:07.692 16:26:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:07.692 16:26:27 -- common/autotest_common.sh@10 -- # set +x 00:17:07.692 [2024-11-09 16:26:27.310716] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
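The xtrace lines above show the pattern ftl/trim.sh uses to bring the target up for the next test phase: launch spdk_tgt with the ftl_init log flag, record its pid in svcpid, and block in waitforlisten (from autotest_common.sh, as the common/autotest_common.sh traces indicate) until the application answers on /var/tmp/spdk.sock. Reduced to its essentials, with the teardown step assumed from the usual autotest helpers rather than quoted from this script:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
  svcpid=$!
  waitforlisten "$svcpid"     # polls /var/tmp/spdk.sock until the RPC server responds
  # ... run the trim test against the target ...
  killprocess "$svcpid"       # assumed cleanup helper; the script's own teardown may differ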
00:17:07.692 [2024-11-09 16:26:27.310830] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72489 ] 00:17:07.692 [2024-11-09 16:26:27.455983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.951 [2024-11-09 16:26:27.595617] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:07.951 [2024-11-09 16:26:27.595773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.517 16:26:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:08.517 16:26:28 -- common/autotest_common.sh@862 -- # return 0 00:17:08.517 16:26:28 -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:08.517 [2024-11-09 16:26:28.273731] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:08.517 [2024-11-09 16:26:28.273777] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:08.777 [2024-11-09 16:26:28.430796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.777 [2024-11-09 16:26:28.430832] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:08.777 [2024-11-09 16:26:28.430847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:08.777 [2024-11-09 16:26:28.430853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.777 [2024-11-09 16:26:28.432858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.777 [2024-11-09 16:26:28.432891] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:08.777 [2024-11-09 16:26:28.432900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.990 ms 00:17:08.777 [2024-11-09 16:26:28.432906] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.777 [2024-11-09 16:26:28.432966] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:08.777 [2024-11-09 16:26:28.433540] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:08.777 [2024-11-09 16:26:28.433567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.777 [2024-11-09 16:26:28.433574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:08.777 [2024-11-09 16:26:28.433582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:17:08.777 [2024-11-09 16:26:28.433588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.778 [2024-11-09 16:26:28.434615] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:08.778 [2024-11-09 16:26:28.444408] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.778 [2024-11-09 16:26:28.444440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:08.778 [2024-11-09 16:26:28.444449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.798 ms 00:17:08.778 [2024-11-09 16:26:28.444457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.778 [2024-11-09 16:26:28.444519] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.778 [2024-11-09 16:26:28.444529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:17:08.778 [2024-11-09 16:26:28.444535] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:08.778 [2024-11-09 16:26:28.444542] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.778 [2024-11-09 16:26:28.448970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.778 [2024-11-09 16:26:28.449000] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:08.778 [2024-11-09 16:26:28.449007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.390 ms 00:17:08.778 [2024-11-09 16:26:28.449014] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.778 [2024-11-09 16:26:28.449083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.778 [2024-11-09 16:26:28.449092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:08.778 [2024-11-09 16:26:28.449098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:08.778 [2024-11-09 16:26:28.449105] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.778 [2024-11-09 16:26:28.449124] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.778 [2024-11-09 16:26:28.449132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:08.778 [2024-11-09 16:26:28.449138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:08.778 [2024-11-09 16:26:28.449145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.778 [2024-11-09 16:26:28.449173] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:08.778 [2024-11-09 16:26:28.451920] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.778 [2024-11-09 16:26:28.451945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:08.778 [2024-11-09 16:26:28.451954] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.753 ms 00:17:08.778 [2024-11-09 16:26:28.451959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.778 [2024-11-09 16:26:28.451990] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.778 [2024-11-09 16:26:28.451996] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:08.778 [2024-11-09 16:26:28.452003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:08.778 [2024-11-09 16:26:28.452010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.778 [2024-11-09 16:26:28.452027] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:08.778 [2024-11-09 16:26:28.452041] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:08.778 [2024-11-09 16:26:28.452068] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:08.778 [2024-11-09 16:26:28.452079] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:08.778 [2024-11-09 16:26:28.452136] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:08.778 [2024-11-09 16:26:28.452144] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob 
store 0x48 bytes 00:17:08.778 [2024-11-09 16:26:28.452156] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:08.778 [2024-11-09 16:26:28.452164] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:08.778 [2024-11-09 16:26:28.452172] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:08.778 [2024-11-09 16:26:28.452178] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:08.778 [2024-11-09 16:26:28.452184] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:08.778 [2024-11-09 16:26:28.452190] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:08.778 [2024-11-09 16:26:28.452198] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:08.778 [2024-11-09 16:26:28.452204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.778 [2024-11-09 16:26:28.452211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:08.778 [2024-11-09 16:26:28.452216] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:17:08.778 [2024-11-09 16:26:28.452231] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.778 [2024-11-09 16:26:28.452283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.778 [2024-11-09 16:26:28.452291] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:08.778 [2024-11-09 16:26:28.452297] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:08.778 [2024-11-09 16:26:28.452303] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.778 [2024-11-09 16:26:28.452360] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:08.778 [2024-11-09 16:26:28.452369] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:08.778 [2024-11-09 16:26:28.452375] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:08.778 [2024-11-09 16:26:28.452382] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.778 [2024-11-09 16:26:28.452388] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:08.778 [2024-11-09 16:26:28.452394] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:08.778 [2024-11-09 16:26:28.452399] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:08.778 [2024-11-09 16:26:28.452407] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:08.778 [2024-11-09 16:26:28.452413] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:08.778 [2024-11-09 16:26:28.452419] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:08.778 [2024-11-09 16:26:28.452424] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:08.778 [2024-11-09 16:26:28.452430] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:08.778 [2024-11-09 16:26:28.452434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:08.778 [2024-11-09 16:26:28.452443] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:08.778 [2024-11-09 16:26:28.452448] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:08.778 [2024-11-09 16:26:28.452454] 
ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.778 [2024-11-09 16:26:28.452459] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:08.778 [2024-11-09 16:26:28.452465] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:08.778 [2024-11-09 16:26:28.452470] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.778 [2024-11-09 16:26:28.452476] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:08.778 [2024-11-09 16:26:28.452481] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:08.778 [2024-11-09 16:26:28.452488] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:08.778 [2024-11-09 16:26:28.452493] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:08.778 [2024-11-09 16:26:28.452500] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:08.778 [2024-11-09 16:26:28.452505] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:08.778 [2024-11-09 16:26:28.452515] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:08.778 [2024-11-09 16:26:28.452520] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:08.778 [2024-11-09 16:26:28.452526] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:08.778 [2024-11-09 16:26:28.452530] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:08.778 [2024-11-09 16:26:28.452537] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:08.778 [2024-11-09 16:26:28.452541] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:08.778 [2024-11-09 16:26:28.452549] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:08.778 [2024-11-09 16:26:28.452554] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:08.778 [2024-11-09 16:26:28.452560] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:08.778 [2024-11-09 16:26:28.452565] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:08.778 [2024-11-09 16:26:28.452571] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:08.778 [2024-11-09 16:26:28.452576] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:08.778 [2024-11-09 16:26:28.452582] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:08.778 [2024-11-09 16:26:28.452587] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:08.778 [2024-11-09 16:26:28.452594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:08.778 [2024-11-09 16:26:28.452598] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:08.778 [2024-11-09 16:26:28.452607] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:08.778 [2024-11-09 16:26:28.452612] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:08.779 [2024-11-09 16:26:28.452618] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:08.779 [2024-11-09 16:26:28.452624] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:08.779 [2024-11-09 16:26:28.452630] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:08.779 [2024-11-09 16:26:28.452636] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 
00:17:08.779 [2024-11-09 16:26:28.452642] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:08.779 [2024-11-09 16:26:28.452647] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:08.779 [2024-11-09 16:26:28.452653] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:08.779 [2024-11-09 16:26:28.452659] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:08.779 [2024-11-09 16:26:28.452667] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:08.779 [2024-11-09 16:26:28.452674] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:08.779 [2024-11-09 16:26:28.452680] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:08.779 [2024-11-09 16:26:28.452686] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:08.779 [2024-11-09 16:26:28.452695] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:08.779 [2024-11-09 16:26:28.452700] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:08.779 [2024-11-09 16:26:28.452707] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:08.779 [2024-11-09 16:26:28.452712] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:08.779 [2024-11-09 16:26:28.452719] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:08.779 [2024-11-09 16:26:28.452724] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:08.779 [2024-11-09 16:26:28.452731] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:08.779 [2024-11-09 16:26:28.452736] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:08.779 [2024-11-09 16:26:28.452743] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:08.779 [2024-11-09 16:26:28.452748] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:08.779 [2024-11-09 16:26:28.452755] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:08.779 [2024-11-09 16:26:28.452761] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:08.779 [2024-11-09 16:26:28.452768] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:08.779 [2024-11-09 16:26:28.452773] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:08.779 [2024-11-09 16:26:28.452780] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:08.779 [2024-11-09 16:26:28.452786] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:08.779 [2024-11-09 16:26:28.452794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.779 [2024-11-09 16:26:28.452800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:08.779 [2024-11-09 16:26:28.452807] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.464 ms 00:17:08.779 [2024-11-09 16:26:28.452812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.779 [2024-11-09 16:26:28.464772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.779 [2024-11-09 16:26:28.464800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:08.779 [2024-11-09 16:26:28.464811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.923 ms 00:17:08.779 [2024-11-09 16:26:28.464818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.779 [2024-11-09 16:26:28.464906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.779 [2024-11-09 16:26:28.464913] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:08.779 [2024-11-09 16:26:28.464921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:08.779 [2024-11-09 16:26:28.464926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.779 [2024-11-09 16:26:28.489350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.779 [2024-11-09 16:26:28.489378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:08.779 [2024-11-09 16:26:28.489388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.406 ms 00:17:08.779 [2024-11-09 16:26:28.489394] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.779 [2024-11-09 16:26:28.489442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.779 [2024-11-09 16:26:28.489450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:08.779 [2024-11-09 16:26:28.489458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:08.779 [2024-11-09 16:26:28.489464] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.779 [2024-11-09 16:26:28.489744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.779 [2024-11-09 16:26:28.489764] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:08.779 [2024-11-09 16:26:28.489774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:17:08.779 [2024-11-09 16:26:28.489779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.779 [2024-11-09 16:26:28.489870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.779 [2024-11-09 16:26:28.489882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:08.779 [2024-11-09 16:26:28.489891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:08.779 [2024-11-09 16:26:28.489896] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:08.779 [2024-11-09 16:26:28.501830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.779 [2024-11-09 16:26:28.501855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:08.779 [2024-11-09 16:26:28.501866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.917 ms 00:17:08.779 [2024-11-09 16:26:28.501871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.779 [2024-11-09 16:26:28.511937] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:08.779 [2024-11-09 16:26:28.511966] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:08.779 [2024-11-09 16:26:28.511976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.779 [2024-11-09 16:26:28.511982] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:08.779 [2024-11-09 16:26:28.511990] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.028 ms 00:17:08.779 [2024-11-09 16:26:28.511995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.779 [2024-11-09 16:26:28.530572] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.779 [2024-11-09 16:26:28.530600] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:08.779 [2024-11-09 16:26:28.530610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.521 ms 00:17:08.779 [2024-11-09 16:26:28.530616] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.779 [2024-11-09 16:26:28.539660] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.779 [2024-11-09 16:26:28.539692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:08.779 [2024-11-09 16:26:28.539701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.001 ms 00:17:08.779 [2024-11-09 16:26:28.539706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.038 [2024-11-09 16:26:28.548511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.038 [2024-11-09 16:26:28.548535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:09.038 [2024-11-09 16:26:28.548546] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.761 ms 00:17:09.038 [2024-11-09 16:26:28.548551] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.038 [2024-11-09 16:26:28.548823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.038 [2024-11-09 16:26:28.548838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:09.038 [2024-11-09 16:26:28.548848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:17:09.039 [2024-11-09 16:26:28.548853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.039 [2024-11-09 16:26:28.595255] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.039 [2024-11-09 16:26:28.595293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:09.039 [2024-11-09 16:26:28.595308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.381 ms 00:17:09.039 [2024-11-09 16:26:28.595314] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.039 [2024-11-09 
16:26:28.603615] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:09.039 [2024-11-09 16:26:28.615320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.039 [2024-11-09 16:26:28.615354] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:09.039 [2024-11-09 16:26:28.615363] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.936 ms 00:17:09.039 [2024-11-09 16:26:28.615371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.039 [2024-11-09 16:26:28.615427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.039 [2024-11-09 16:26:28.615438] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:09.039 [2024-11-09 16:26:28.615445] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:09.039 [2024-11-09 16:26:28.615455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.039 [2024-11-09 16:26:28.615492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.039 [2024-11-09 16:26:28.615501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:09.039 [2024-11-09 16:26:28.615507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:09.039 [2024-11-09 16:26:28.615513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.039 [2024-11-09 16:26:28.616442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.039 [2024-11-09 16:26:28.616470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:09.039 [2024-11-09 16:26:28.616477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.913 ms 00:17:09.039 [2024-11-09 16:26:28.616484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.039 [2024-11-09 16:26:28.616508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.039 [2024-11-09 16:26:28.616518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:09.039 [2024-11-09 16:26:28.616524] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:09.039 [2024-11-09 16:26:28.616530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.039 [2024-11-09 16:26:28.616557] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:09.039 [2024-11-09 16:26:28.616567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.039 [2024-11-09 16:26:28.616573] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:09.039 [2024-11-09 16:26:28.616580] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:09.039 [2024-11-09 16:26:28.616586] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.039 [2024-11-09 16:26:28.634941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.039 [2024-11-09 16:26:28.634971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:09.039 [2024-11-09 16:26:28.634981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.334 ms 00:17:09.039 [2024-11-09 16:26:28.634987] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.039 [2024-11-09 16:26:28.635060] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.039 [2024-11-09 16:26:28.635067] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:09.039 [2024-11-09 16:26:28.635075] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:09.039 [2024-11-09 16:26:28.635083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.039 [2024-11-09 16:26:28.635792] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:09.039 [2024-11-09 16:26:28.638239] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 204.784 ms, result 0 00:17:09.039 [2024-11-09 16:26:28.639274] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:09.039 Some configs were skipped because the RPC state that can call them passed over. 00:17:09.039 16:26:28 -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:09.298 [2024-11-09 16:26:28.860305] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.298 [2024-11-09 16:26:28.860340] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:09.298 [2024-11-09 16:26:28.860348] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.305 ms 00:17:09.298 [2024-11-09 16:26:28.860356] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.298 [2024-11-09 16:26:28.860383] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 18.383 ms, result 0 00:17:09.298 true 00:17:09.298 16:26:28 -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:09.298 [2024-11-09 16:26:29.066488] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.298 [2024-11-09 16:26:29.066519] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:09.298 [2024-11-09 16:26:29.066528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.873 ms 00:17:09.298 [2024-11-09 16:26:29.066533] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.298 [2024-11-09 16:26:29.066561] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 17.945 ms, result 0 00:17:09.557 true 00:17:09.557 16:26:29 -- ftl/trim.sh@81 -- # killprocess 72489 00:17:09.557 16:26:29 -- common/autotest_common.sh@936 -- # '[' -z 72489 ']' 00:17:09.557 16:26:29 -- common/autotest_common.sh@940 -- # kill -0 72489 00:17:09.557 16:26:29 -- common/autotest_common.sh@941 -- # uname 00:17:09.557 16:26:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:09.557 16:26:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72489 00:17:09.557 16:26:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:09.557 16:26:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:09.557 killing process with pid 72489 00:17:09.557 16:26:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72489' 00:17:09.557 16:26:29 -- common/autotest_common.sh@955 -- # kill 72489 00:17:09.557 16:26:29 -- common/autotest_common.sh@960 -- # wait 72489 00:17:10.124 [2024-11-09 16:26:29.640492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.124 [2024-11-09 16:26:29.640537] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit 
core IO channel 00:17:10.124 [2024-11-09 16:26:29.640547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:10.124 [2024-11-09 16:26:29.640555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.124 [2024-11-09 16:26:29.640574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:10.124 [2024-11-09 16:26:29.642624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.124 [2024-11-09 16:26:29.642650] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:10.124 [2024-11-09 16:26:29.642662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.037 ms 00:17:10.124 [2024-11-09 16:26:29.642668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.124 [2024-11-09 16:26:29.642897] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.124 [2024-11-09 16:26:29.642915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:10.124 [2024-11-09 16:26:29.642924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:17:10.124 [2024-11-09 16:26:29.642929] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.124 [2024-11-09 16:26:29.646206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.124 [2024-11-09 16:26:29.646238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:10.124 [2024-11-09 16:26:29.646249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.259 ms 00:17:10.124 [2024-11-09 16:26:29.646255] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.124 [2024-11-09 16:26:29.651530] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.124 [2024-11-09 16:26:29.651564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:10.124 [2024-11-09 16:26:29.651573] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.246 ms 00:17:10.124 [2024-11-09 16:26:29.651579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.124 [2024-11-09 16:26:29.659205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.124 [2024-11-09 16:26:29.659250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:10.124 [2024-11-09 16:26:29.659262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.580 ms 00:17:10.124 [2024-11-09 16:26:29.659267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.124 [2024-11-09 16:26:29.665695] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.124 [2024-11-09 16:26:29.665723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:10.124 [2024-11-09 16:26:29.665733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.395 ms 00:17:10.124 [2024-11-09 16:26:29.665739] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.124 [2024-11-09 16:26:29.665839] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.124 [2024-11-09 16:26:29.665846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:10.125 [2024-11-09 16:26:29.665854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:17:10.125 [2024-11-09 16:26:29.665859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:10.125 [2024-11-09 16:26:29.673844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.125 [2024-11-09 16:26:29.673870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:10.125 [2024-11-09 16:26:29.673878] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.968 ms 00:17:10.125 [2024-11-09 16:26:29.673884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.125 [2024-11-09 16:26:29.681134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.125 [2024-11-09 16:26:29.681173] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:10.125 [2024-11-09 16:26:29.681185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.218 ms 00:17:10.125 [2024-11-09 16:26:29.681190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.125 [2024-11-09 16:26:29.688351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.125 [2024-11-09 16:26:29.688377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:10.125 [2024-11-09 16:26:29.688385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.130 ms 00:17:10.125 [2024-11-09 16:26:29.688390] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.125 [2024-11-09 16:26:29.695622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.125 [2024-11-09 16:26:29.695646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:10.125 [2024-11-09 16:26:29.695654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.179 ms 00:17:10.125 [2024-11-09 16:26:29.695659] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.125 [2024-11-09 16:26:29.695687] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:10.125 [2024-11-09 16:26:29.695698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695774] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 
16:26:29.695932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.695997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 
00:17:10.125 [2024-11-09 16:26:29.696092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:10.125 [2024-11-09 16:26:29.696130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 
wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:10.126 [2024-11-09 16:26:29.696366] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:10.126 [2024-11-09 16:26:29.696375] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0532df08-a2cd-46ce-aed4-05877063f95a 00:17:10.126 [2024-11-09 16:26:29.696381] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:10.126 [2024-11-09 16:26:29.696387] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:10.126 [2024-11-09 16:26:29.696393] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:10.126 [2024-11-09 16:26:29.696400] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:10.126 [2024-11-09 16:26:29.696405] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:10.126 [2024-11-09 16:26:29.696412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:10.126 [2024-11-09 16:26:29.696420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:10.126 [2024-11-09 16:26:29.696426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:10.126 [2024-11-09 16:26:29.696431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:10.126 [2024-11-09 16:26:29.696438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.126 [2024-11-09 16:26:29.696444] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:10.126 [2024-11-09 16:26:29.696451] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:17:10.126 [2024-11-09 16:26:29.696458] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.126 [2024-11-09 16:26:29.706309] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.126 [2024-11-09 16:26:29.706334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:10.126 [2024-11-09 16:26:29.706345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.834 ms 00:17:10.126 [2024-11-09 16:26:29.706350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.126 [2024-11-09 16:26:29.706518] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:10.126 [2024-11-09 16:26:29.706525] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:10.126 [2024-11-09 16:26:29.706534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.133 ms 00:17:10.126 [2024-11-09 16:26:29.706540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.126 [2024-11-09 16:26:29.742003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.126 [2024-11-09 16:26:29.742031] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:10.126 [2024-11-09 16:26:29.742040] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.126 [2024-11-09 16:26:29.742046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.126 [2024-11-09 16:26:29.742106] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.126 [2024-11-09 16:26:29.742113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:10.126 [2024-11-09 16:26:29.742122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.126 [2024-11-09 16:26:29.742127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.126 [2024-11-09 16:26:29.742161] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.126 [2024-11-09 16:26:29.742167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:10.126 [2024-11-09 16:26:29.742176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.126 [2024-11-09 16:26:29.742182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.126 [2024-11-09 16:26:29.742198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.126 [2024-11-09 16:26:29.742204] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:10.126 [2024-11-09 16:26:29.742211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.126 [2024-11-09 16:26:29.742218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.126 [2024-11-09 16:26:29.803716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.126 [2024-11-09 16:26:29.803752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:10.126 [2024-11-09 16:26:29.803762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.126 [2024-11-09 16:26:29.803769] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.126 [2024-11-09 16:26:29.826439] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.126 [2024-11-09 16:26:29.826467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:17:10.126 [2024-11-09 16:26:29.826476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.126 [2024-11-09 16:26:29.826484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.126 [2024-11-09 16:26:29.826523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.126 [2024-11-09 16:26:29.826530] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:10.126 [2024-11-09 16:26:29.826539] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.126 [2024-11-09 16:26:29.826544] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.126 [2024-11-09 16:26:29.826570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.126 [2024-11-09 16:26:29.826575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:10.126 [2024-11-09 16:26:29.826583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.126 [2024-11-09 16:26:29.826588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.126 [2024-11-09 16:26:29.826663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.127 [2024-11-09 16:26:29.826671] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:10.127 [2024-11-09 16:26:29.826678] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.127 [2024-11-09 16:26:29.826684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.127 [2024-11-09 16:26:29.826709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.127 [2024-11-09 16:26:29.826716] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:10.127 [2024-11-09 16:26:29.826724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.127 [2024-11-09 16:26:29.826729] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.127 [2024-11-09 16:26:29.826760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.127 [2024-11-09 16:26:29.826766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:10.127 [2024-11-09 16:26:29.826775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.127 [2024-11-09 16:26:29.826781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.127 [2024-11-09 16:26:29.826818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:10.127 [2024-11-09 16:26:29.826825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:10.127 [2024-11-09 16:26:29.826832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:10.127 [2024-11-09 16:26:29.826837] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:10.127 [2024-11-09 16:26:29.826942] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 186.433 ms, result 0 00:17:11.060 16:26:30 -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:11.060 16:26:30 -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:11.060 [2024-11-09 16:26:30.525254] Starting SPDK v24.01.1-pre git sha1 
c13c99a5e / DPDK 23.11.0 initialization... 00:17:11.060 [2024-11-09 16:26:30.525369] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72536 ] 00:17:11.060 [2024-11-09 16:26:30.675428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.319 [2024-11-09 16:26:30.865396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.319 [2024-11-09 16:26:31.070377] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:11.319 [2024-11-09 16:26:31.070426] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:11.578 [2024-11-09 16:26:31.215577] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.578 [2024-11-09 16:26:31.215614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:11.578 [2024-11-09 16:26:31.215623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:11.578 [2024-11-09 16:26:31.215629] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.578 [2024-11-09 16:26:31.217724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.578 [2024-11-09 16:26:31.217754] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:11.578 [2024-11-09 16:26:31.217762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.083 ms 00:17:11.578 [2024-11-09 16:26:31.217767] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.578 [2024-11-09 16:26:31.217828] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:11.578 [2024-11-09 16:26:31.218479] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:11.578 [2024-11-09 16:26:31.218505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.578 [2024-11-09 16:26:31.218512] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:11.578 [2024-11-09 16:26:31.218518] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:17:11.578 [2024-11-09 16:26:31.218524] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.578 [2024-11-09 16:26:31.219537] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:11.578 [2024-11-09 16:26:31.229320] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.578 [2024-11-09 16:26:31.229346] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:11.578 [2024-11-09 16:26:31.229354] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.784 ms 00:17:11.578 [2024-11-09 16:26:31.229360] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.578 [2024-11-09 16:26:31.229426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.578 [2024-11-09 16:26:31.229435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:11.578 [2024-11-09 16:26:31.229442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:17:11.578 [2024-11-09 16:26:31.229447] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.578 [2024-11-09 16:26:31.233893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:11.578 [2024-11-09 16:26:31.233916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:11.578 [2024-11-09 16:26:31.233923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.416 ms 00:17:11.578 [2024-11-09 16:26:31.233933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.578 [2024-11-09 16:26:31.234015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.578 [2024-11-09 16:26:31.234022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:11.578 [2024-11-09 16:26:31.234029] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:11.578 [2024-11-09 16:26:31.234035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.578 [2024-11-09 16:26:31.234054] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.578 [2024-11-09 16:26:31.234060] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:11.578 [2024-11-09 16:26:31.234066] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:11.578 [2024-11-09 16:26:31.234071] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.578 [2024-11-09 16:26:31.234092] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:11.578 [2024-11-09 16:26:31.236891] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.578 [2024-11-09 16:26:31.236912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:11.578 [2024-11-09 16:26:31.236919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.807 ms 00:17:11.578 [2024-11-09 16:26:31.236926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.578 [2024-11-09 16:26:31.236956] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.578 [2024-11-09 16:26:31.236963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:11.578 [2024-11-09 16:26:31.236969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:11.578 [2024-11-09 16:26:31.236974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.578 [2024-11-09 16:26:31.236987] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:11.578 [2024-11-09 16:26:31.237001] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:11.578 [2024-11-09 16:26:31.237026] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:11.579 [2024-11-09 16:26:31.237040] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:11.579 [2024-11-09 16:26:31.237097] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:11.579 [2024-11-09 16:26:31.237105] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:11.579 [2024-11-09 16:26:31.237112] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:11.579 [2024-11-09 16:26:31.237120] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:11.579 [2024-11-09 16:26:31.237126] 
ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:11.579 [2024-11-09 16:26:31.237132] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:11.579 [2024-11-09 16:26:31.237138] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:11.579 [2024-11-09 16:26:31.237143] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:11.579 [2024-11-09 16:26:31.237162] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:11.579 [2024-11-09 16:26:31.237172] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.579 [2024-11-09 16:26:31.237179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:11.579 [2024-11-09 16:26:31.237185] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:17:11.579 [2024-11-09 16:26:31.237191] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.579 [2024-11-09 16:26:31.237250] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.579 [2024-11-09 16:26:31.237258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:11.579 [2024-11-09 16:26:31.237263] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:11.579 [2024-11-09 16:26:31.237269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.579 [2024-11-09 16:26:31.237324] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:11.579 [2024-11-09 16:26:31.237332] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:11.579 [2024-11-09 16:26:31.237338] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:11.579 [2024-11-09 16:26:31.237344] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.579 [2024-11-09 16:26:31.237350] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:11.579 [2024-11-09 16:26:31.237355] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:11.579 [2024-11-09 16:26:31.237360] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:11.579 [2024-11-09 16:26:31.237366] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:11.579 [2024-11-09 16:26:31.237371] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:11.579 [2024-11-09 16:26:31.237375] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:11.579 [2024-11-09 16:26:31.237381] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:11.579 [2024-11-09 16:26:31.237386] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:11.579 [2024-11-09 16:26:31.237391] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:11.579 [2024-11-09 16:26:31.237396] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:11.579 [2024-11-09 16:26:31.237405] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:11.579 [2024-11-09 16:26:31.237411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.579 [2024-11-09 16:26:31.237416] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:11.579 [2024-11-09 16:26:31.237422] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:11.579 [2024-11-09 16:26:31.237427] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.579 [2024-11-09 16:26:31.237432] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:11.579 [2024-11-09 16:26:31.237437] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:11.579 [2024-11-09 16:26:31.237442] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:11.579 [2024-11-09 16:26:31.237447] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:11.579 [2024-11-09 16:26:31.237452] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:11.579 [2024-11-09 16:26:31.237457] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:11.579 [2024-11-09 16:26:31.237461] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:11.579 [2024-11-09 16:26:31.237466] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:11.579 [2024-11-09 16:26:31.237471] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:11.579 [2024-11-09 16:26:31.237475] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:11.579 [2024-11-09 16:26:31.237480] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:11.579 [2024-11-09 16:26:31.237484] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:11.579 [2024-11-09 16:26:31.237489] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:11.579 [2024-11-09 16:26:31.237494] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:11.579 [2024-11-09 16:26:31.237499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:11.579 [2024-11-09 16:26:31.237504] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:11.579 [2024-11-09 16:26:31.237509] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:11.579 [2024-11-09 16:26:31.237513] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:11.579 [2024-11-09 16:26:31.237518] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:11.579 [2024-11-09 16:26:31.237523] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:11.579 [2024-11-09 16:26:31.237527] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:11.579 [2024-11-09 16:26:31.237532] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:11.579 [2024-11-09 16:26:31.237538] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:11.579 [2024-11-09 16:26:31.237543] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:11.579 [2024-11-09 16:26:31.237551] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:11.579 [2024-11-09 16:26:31.237556] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:11.579 [2024-11-09 16:26:31.237561] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:11.579 [2024-11-09 16:26:31.237566] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:11.579 [2024-11-09 16:26:31.237572] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:11.579 [2024-11-09 16:26:31.237577] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:11.579 [2024-11-09 16:26:31.237581] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:11.579 
[2024-11-09 16:26:31.237587] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:11.579 [2024-11-09 16:26:31.237594] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:11.579 [2024-11-09 16:26:31.237600] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:11.579 [2024-11-09 16:26:31.237606] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:11.579 [2024-11-09 16:26:31.237611] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:11.579 [2024-11-09 16:26:31.237616] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:11.579 [2024-11-09 16:26:31.237622] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:11.579 [2024-11-09 16:26:31.237627] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:11.579 [2024-11-09 16:26:31.237632] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:11.579 [2024-11-09 16:26:31.237638] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:11.579 [2024-11-09 16:26:31.237643] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:11.579 [2024-11-09 16:26:31.237648] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:11.579 [2024-11-09 16:26:31.237653] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:11.579 [2024-11-09 16:26:31.237659] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:11.579 [2024-11-09 16:26:31.237664] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:11.579 [2024-11-09 16:26:31.237669] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:11.579 [2024-11-09 16:26:31.237679] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:11.579 [2024-11-09 16:26:31.237685] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:11.579 [2024-11-09 16:26:31.237690] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:11.579 [2024-11-09 16:26:31.237695] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:11.579 [2024-11-09 16:26:31.237701] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:11.579 [2024-11-09 16:26:31.237707] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.579 [2024-11-09 16:26:31.237712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:11.579 [2024-11-09 16:26:31.237718] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:17:11.579 [2024-11-09 16:26:31.237724] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.579 [2024-11-09 16:26:31.249700] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.579 [2024-11-09 16:26:31.249727] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:11.579 [2024-11-09 16:26:31.249735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.943 ms 00:17:11.579 [2024-11-09 16:26:31.249741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.579 [2024-11-09 16:26:31.249830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.580 [2024-11-09 16:26:31.249838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:11.580 [2024-11-09 16:26:31.249844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:11.580 [2024-11-09 16:26:31.249850] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.580 [2024-11-09 16:26:31.286487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.580 [2024-11-09 16:26:31.286515] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:11.580 [2024-11-09 16:26:31.286525] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.620 ms 00:17:11.580 [2024-11-09 16:26:31.286532] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.580 [2024-11-09 16:26:31.286590] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.580 [2024-11-09 16:26:31.286599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:11.580 [2024-11-09 16:26:31.286609] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:11.580 [2024-11-09 16:26:31.286615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.580 [2024-11-09 16:26:31.286925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.580 [2024-11-09 16:26:31.286949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:11.580 [2024-11-09 16:26:31.286956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:17:11.580 [2024-11-09 16:26:31.286962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.580 [2024-11-09 16:26:31.287090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.580 [2024-11-09 16:26:31.287100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:11.580 [2024-11-09 16:26:31.287106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:17:11.580 [2024-11-09 16:26:31.287112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.580 [2024-11-09 16:26:31.299745] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.580 [2024-11-09 16:26:31.299771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:11.580 [2024-11-09 16:26:31.299780] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
12.614 ms 00:17:11.580 [2024-11-09 16:26:31.299788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.580 [2024-11-09 16:26:31.310061] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:11.580 [2024-11-09 16:26:31.310087] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:11.580 [2024-11-09 16:26:31.310095] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.580 [2024-11-09 16:26:31.310102] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:11.580 [2024-11-09 16:26:31.310109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.226 ms 00:17:11.580 [2024-11-09 16:26:31.310115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.580 [2024-11-09 16:26:31.328984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.580 [2024-11-09 16:26:31.329014] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:11.580 [2024-11-09 16:26:31.329023] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.812 ms 00:17:11.580 [2024-11-09 16:26:31.329029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.580 [2024-11-09 16:26:31.338332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.580 [2024-11-09 16:26:31.338357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:11.580 [2024-11-09 16:26:31.338370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.248 ms 00:17:11.580 [2024-11-09 16:26:31.338376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.842 [2024-11-09 16:26:31.347122] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.843 [2024-11-09 16:26:31.347145] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:11.843 [2024-11-09 16:26:31.347153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.704 ms 00:17:11.843 [2024-11-09 16:26:31.347159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.843 [2024-11-09 16:26:31.347475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.843 [2024-11-09 16:26:31.347492] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:11.843 [2024-11-09 16:26:31.347499] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.254 ms 00:17:11.843 [2024-11-09 16:26:31.347507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.843 [2024-11-09 16:26:31.393083] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.843 [2024-11-09 16:26:31.393113] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:11.843 [2024-11-09 16:26:31.393122] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.557 ms 00:17:11.843 [2024-11-09 16:26:31.393133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.843 [2024-11-09 16:26:31.401109] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:11.843 [2024-11-09 16:26:31.412678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.843 [2024-11-09 16:26:31.412703] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:11.843 [2024-11-09 
16:26:31.412712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.469 ms 00:17:11.843 [2024-11-09 16:26:31.412718] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.843 [2024-11-09 16:26:31.412768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.843 [2024-11-09 16:26:31.412775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:11.843 [2024-11-09 16:26:31.412784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:11.843 [2024-11-09 16:26:31.412790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.843 [2024-11-09 16:26:31.412827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.843 [2024-11-09 16:26:31.412833] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:11.843 [2024-11-09 16:26:31.412839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:11.843 [2024-11-09 16:26:31.412845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.843 [2024-11-09 16:26:31.413833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.843 [2024-11-09 16:26:31.413858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:11.843 [2024-11-09 16:26:31.413865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.971 ms 00:17:11.843 [2024-11-09 16:26:31.413871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.843 [2024-11-09 16:26:31.413896] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.843 [2024-11-09 16:26:31.413905] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:11.843 [2024-11-09 16:26:31.413911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:11.843 [2024-11-09 16:26:31.413917] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.843 [2024-11-09 16:26:31.413943] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:11.843 [2024-11-09 16:26:31.413950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.843 [2024-11-09 16:26:31.413955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:11.843 [2024-11-09 16:26:31.413961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:11.843 [2024-11-09 16:26:31.413966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.843 [2024-11-09 16:26:31.432140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.843 [2024-11-09 16:26:31.432166] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:11.843 [2024-11-09 16:26:31.432175] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.158 ms 00:17:11.843 [2024-11-09 16:26:31.432181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.843 [2024-11-09 16:26:31.432257] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:11.843 [2024-11-09 16:26:31.432265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:11.843 [2024-11-09 16:26:31.432272] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:11.843 [2024-11-09 16:26:31.432277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:11.843 [2024-11-09 16:26:31.432887] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:11.843 [2024-11-09 16:26:31.435385] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 217.094 ms, result 0 00:17:11.843 [2024-11-09 16:26:31.436092] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:11.843 [2024-11-09 16:26:31.451297] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:12.785  [2024-11-09T16:26:33.499Z] Copying: 21/256 [MB] (21 MBps) [2024-11-09T16:26:34.506Z] Copying: 37/256 [MB] (16 MBps) [2024-11-09T16:26:35.894Z] Copying: 50/256 [MB] (12 MBps) [2024-11-09T16:26:36.466Z] Copying: 65/256 [MB] (14 MBps) [2024-11-09T16:26:37.853Z] Copying: 79/256 [MB] (14 MBps) [2024-11-09T16:26:38.800Z] Copying: 99/256 [MB] (20 MBps) [2024-11-09T16:26:39.749Z] Copying: 114/256 [MB] (15 MBps) [2024-11-09T16:26:40.693Z] Copying: 125/256 [MB] (10 MBps) [2024-11-09T16:26:41.634Z] Copying: 136/256 [MB] (10 MBps) [2024-11-09T16:26:42.580Z] Copying: 152/256 [MB] (15 MBps) [2024-11-09T16:26:43.524Z] Copying: 165/256 [MB] (13 MBps) [2024-11-09T16:26:44.470Z] Copying: 180/256 [MB] (15 MBps) [2024-11-09T16:26:45.858Z] Copying: 196/256 [MB] (15 MBps) [2024-11-09T16:26:46.802Z] Copying: 213/256 [MB] (17 MBps) [2024-11-09T16:26:47.746Z] Copying: 238/256 [MB] (24 MBps) [2024-11-09T16:26:47.746Z] Copying: 255/256 [MB] (17 MBps) [2024-11-09T16:26:47.746Z] Copying: 256/256 [MB] (average 15 MBps)[2024-11-09 16:26:47.455913] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:27.976 [2024-11-09 16:26:47.466259] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.976 [2024-11-09 16:26:47.466314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:27.976 [2024-11-09 16:26:47.466329] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:27.976 [2024-11-09 16:26:47.466338] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.976 [2024-11-09 16:26:47.466363] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:27.976 [2024-11-09 16:26:47.469133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.976 [2024-11-09 16:26:47.469184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:27.976 [2024-11-09 16:26:47.469196] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.755 ms 00:17:27.976 [2024-11-09 16:26:47.469204] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.976 [2024-11-09 16:26:47.469520] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.976 [2024-11-09 16:26:47.469532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:27.976 [2024-11-09 16:26:47.469542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:17:27.976 [2024-11-09 16:26:47.469553] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.976 [2024-11-09 16:26:47.473278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.976 [2024-11-09 16:26:47.473302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:27.976 [2024-11-09 16:26:47.473313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.709 
ms 00:17:27.976 [2024-11-09 16:26:47.473322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.976 [2024-11-09 16:26:47.480427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.976 [2024-11-09 16:26:47.480625] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:27.976 [2024-11-09 16:26:47.480645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.070 ms 00:17:27.976 [2024-11-09 16:26:47.480655] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.976 [2024-11-09 16:26:47.506845] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.976 [2024-11-09 16:26:47.506892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:27.976 [2024-11-09 16:26:47.506905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.104 ms 00:17:27.976 [2024-11-09 16:26:47.506912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.976 [2024-11-09 16:26:47.523847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.976 [2024-11-09 16:26:47.523894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:27.976 [2024-11-09 16:26:47.523906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.868 ms 00:17:27.976 [2024-11-09 16:26:47.523915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.976 [2024-11-09 16:26:47.524080] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.976 [2024-11-09 16:26:47.524091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:27.976 [2024-11-09 16:26:47.524100] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:17:27.976 [2024-11-09 16:26:47.524109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.976 [2024-11-09 16:26:47.550220] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.976 [2024-11-09 16:26:47.550406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:27.976 [2024-11-09 16:26:47.550426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.093 ms 00:17:27.977 [2024-11-09 16:26:47.550433] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.977 [2024-11-09 16:26:47.576183] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.977 [2024-11-09 16:26:47.576240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:27.977 [2024-11-09 16:26:47.576251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.630 ms 00:17:27.977 [2024-11-09 16:26:47.576258] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.977 [2024-11-09 16:26:47.600948] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.977 [2024-11-09 16:26:47.600992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:27.977 [2024-11-09 16:26:47.601003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.616 ms 00:17:27.977 [2024-11-09 16:26:47.601010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.977 [2024-11-09 16:26:47.625724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.977 [2024-11-09 16:26:47.625766] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:27.977 [2024-11-09 16:26:47.625777] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.622 ms 00:17:27.977 [2024-11-09 16:26:47.625784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.977 [2024-11-09 16:26:47.625846] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:27.977 [2024-11-09 16:26:47.625863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.625874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.625882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.625890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.625897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.625904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.625912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.625919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.625927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.625935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.625943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.625951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.625958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.625966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.625974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.625981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.625989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.625997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 
261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626440] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:27.977 [2024-11-09 16:26:47.626486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 
16:26:47.626643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:27.978 [2024-11-09 16:26:47.626675] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:27.978 [2024-11-09 16:26:47.626683] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0532df08-a2cd-46ce-aed4-05877063f95a 00:17:27.978 [2024-11-09 16:26:47.626692] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:27.978 [2024-11-09 16:26:47.626699] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:27.978 [2024-11-09 16:26:47.626707] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:27.978 [2024-11-09 16:26:47.626716] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:27.978 [2024-11-09 16:26:47.626723] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:27.978 [2024-11-09 16:26:47.626734] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:27.978 [2024-11-09 16:26:47.626745] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:27.978 [2024-11-09 16:26:47.626752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:27.978 [2024-11-09 16:26:47.626758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:27.978 [2024-11-09 16:26:47.626766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.978 [2024-11-09 16:26:47.626774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:27.978 [2024-11-09 16:26:47.626782] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.921 ms 00:17:27.978 [2024-11-09 16:26:47.626789] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.978 [2024-11-09 16:26:47.640248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.978 [2024-11-09 16:26:47.640405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:27.978 [2024-11-09 16:26:47.640430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.425 ms 00:17:27.978 [2024-11-09 16:26:47.640438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.978 [2024-11-09 16:26:47.640682] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:27.978 [2024-11-09 16:26:47.640693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:27.978 [2024-11-09 16:26:47.640701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:17:27.978 [2024-11-09 16:26:47.640708] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.978 [2024-11-09 16:26:47.682279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.978 [2024-11-09 16:26:47.682329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:27.978 [2024-11-09 16:26:47.682347] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.978 [2024-11-09 16:26:47.682355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.978 [2024-11-09 16:26:47.682459] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:17:27.978 [2024-11-09 16:26:47.682469] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:27.978 [2024-11-09 16:26:47.682478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.978 [2024-11-09 16:26:47.682486] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.978 [2024-11-09 16:26:47.682536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.978 [2024-11-09 16:26:47.682545] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:27.978 [2024-11-09 16:26:47.682553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.978 [2024-11-09 16:26:47.682566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:27.978 [2024-11-09 16:26:47.682584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:27.978 [2024-11-09 16:26:47.682592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:27.978 [2024-11-09 16:26:47.682599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:27.978 [2024-11-09 16:26:47.682606] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.239 [2024-11-09 16:26:47.762814] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.239 [2024-11-09 16:26:47.763024] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.239 [2024-11-09 16:26:47.763053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.239 [2024-11-09 16:26:47.763061] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.239 [2024-11-09 16:26:47.794699] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.239 [2024-11-09 16:26:47.794744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.239 [2024-11-09 16:26:47.794756] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.239 [2024-11-09 16:26:47.794765] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.239 [2024-11-09 16:26:47.794826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.239 [2024-11-09 16:26:47.794836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.239 [2024-11-09 16:26:47.794845] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.239 [2024-11-09 16:26:47.794853] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.239 [2024-11-09 16:26:47.794893] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.239 [2024-11-09 16:26:47.794902] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.239 [2024-11-09 16:26:47.794911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.239 [2024-11-09 16:26:47.794919] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.239 [2024-11-09 16:26:47.795023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.239 [2024-11-09 16:26:47.795033] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.239 [2024-11-09 16:26:47.795042] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.239 [2024-11-09 16:26:47.795050] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:17:28.239 [2024-11-09 16:26:47.795086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.239 [2024-11-09 16:26:47.795096] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:28.239 [2024-11-09 16:26:47.795104] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.239 [2024-11-09 16:26:47.795112] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.239 [2024-11-09 16:26:47.795157] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.239 [2024-11-09 16:26:47.795167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.239 [2024-11-09 16:26:47.795176] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.239 [2024-11-09 16:26:47.795184] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.239 [2024-11-09 16:26:47.795264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:28.239 [2024-11-09 16:26:47.795278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.239 [2024-11-09 16:26:47.795286] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:28.239 [2024-11-09 16:26:47.795295] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.239 [2024-11-09 16:26:47.795455] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 329.218 ms, result 0 00:17:29.185 00:17:29.185 00:17:29.185 16:26:48 -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:29.185 16:26:48 -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:29.756 16:26:49 -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:29.756 [2024-11-09 16:26:49.364304] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:17:29.756 [2024-11-09 16:26:49.364468] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72736 ] 00:17:29.756 [2024-11-09 16:26:49.519281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.017 [2024-11-09 16:26:49.739117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.277 [2024-11-09 16:26:50.025334] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:30.277 [2024-11-09 16:26:50.025656] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:30.540 [2024-11-09 16:26:50.185566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.540 [2024-11-09 16:26:50.185813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:30.540 [2024-11-09 16:26:50.185838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:30.540 [2024-11-09 16:26:50.185848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.540 [2024-11-09 16:26:50.188766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.540 [2024-11-09 16:26:50.188819] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:30.540 [2024-11-09 16:26:50.188831] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.890 ms 00:17:30.540 [2024-11-09 16:26:50.188839] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.540 [2024-11-09 16:26:50.188952] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:30.540 [2024-11-09 16:26:50.189887] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:30.540 [2024-11-09 16:26:50.190015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.540 [2024-11-09 16:26:50.190074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:30.540 [2024-11-09 16:26:50.190106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:17:30.540 [2024-11-09 16:26:50.190126] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.540 [2024-11-09 16:26:50.191882] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:30.540 [2024-11-09 16:26:50.206303] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.540 [2024-11-09 16:26:50.206479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:30.540 [2024-11-09 16:26:50.206500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.423 ms 00:17:30.540 [2024-11-09 16:26:50.206508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.540 [2024-11-09 16:26:50.206703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.540 [2024-11-09 16:26:50.206729] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:30.540 [2024-11-09 16:26:50.206739] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:30.540 [2024-11-09 16:26:50.206747] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.540 [2024-11-09 16:26:50.214690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.540 [2024-11-09 
16:26:50.214734] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:30.540 [2024-11-09 16:26:50.214744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.895 ms 00:17:30.540 [2024-11-09 16:26:50.214756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.540 [2024-11-09 16:26:50.214873] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.540 [2024-11-09 16:26:50.214883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:30.540 [2024-11-09 16:26:50.214892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:30.540 [2024-11-09 16:26:50.214900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.540 [2024-11-09 16:26:50.214927] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.540 [2024-11-09 16:26:50.214937] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:30.540 [2024-11-09 16:26:50.214949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:30.540 [2024-11-09 16:26:50.214957] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.540 [2024-11-09 16:26:50.214988] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:30.540 [2024-11-09 16:26:50.219078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.540 [2024-11-09 16:26:50.219119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:30.540 [2024-11-09 16:26:50.219130] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.105 ms 00:17:30.540 [2024-11-09 16:26:50.219141] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.540 [2024-11-09 16:26:50.219216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.540 [2024-11-09 16:26:50.219247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:30.540 [2024-11-09 16:26:50.219258] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:17:30.540 [2024-11-09 16:26:50.219265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.540 [2024-11-09 16:26:50.219287] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:30.540 [2024-11-09 16:26:50.219309] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:30.540 [2024-11-09 16:26:50.219343] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:30.540 [2024-11-09 16:26:50.219363] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:30.540 [2024-11-09 16:26:50.219438] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:30.540 [2024-11-09 16:26:50.219450] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:30.540 [2024-11-09 16:26:50.219460] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:30.540 [2024-11-09 16:26:50.219470] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:30.540 [2024-11-09 16:26:50.219480] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:30.540 [2024-11-09 16:26:50.219488] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:30.540 [2024-11-09 16:26:50.219496] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:30.540 [2024-11-09 16:26:50.219503] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:30.540 [2024-11-09 16:26:50.219513] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:30.540 [2024-11-09 16:26:50.219521] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.540 [2024-11-09 16:26:50.219528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:30.540 [2024-11-09 16:26:50.219536] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:17:30.540 [2024-11-09 16:26:50.219543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.540 [2024-11-09 16:26:50.219608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.540 [2024-11-09 16:26:50.219617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:30.540 [2024-11-09 16:26:50.219625] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:30.540 [2024-11-09 16:26:50.219632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.540 [2024-11-09 16:26:50.219708] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:30.540 [2024-11-09 16:26:50.219719] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:30.540 [2024-11-09 16:26:50.219728] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:30.540 [2024-11-09 16:26:50.219736] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.540 [2024-11-09 16:26:50.219744] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:30.540 [2024-11-09 16:26:50.219750] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:30.540 [2024-11-09 16:26:50.219757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:30.541 [2024-11-09 16:26:50.219763] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:30.541 [2024-11-09 16:26:50.219770] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:30.541 [2024-11-09 16:26:50.219777] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:30.541 [2024-11-09 16:26:50.219783] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:30.541 [2024-11-09 16:26:50.219790] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:30.541 [2024-11-09 16:26:50.219799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:30.541 [2024-11-09 16:26:50.219807] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:30.541 [2024-11-09 16:26:50.219821] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:30.541 [2024-11-09 16:26:50.219827] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.541 [2024-11-09 16:26:50.219834] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:30.541 [2024-11-09 16:26:50.219841] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:30.541 [2024-11-09 16:26:50.219848] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:30.541 [2024-11-09 16:26:50.219854] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:30.541 [2024-11-09 16:26:50.219861] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:30.541 [2024-11-09 16:26:50.219868] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:30.541 [2024-11-09 16:26:50.219875] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:30.541 [2024-11-09 16:26:50.219882] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:30.541 [2024-11-09 16:26:50.219888] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:30.541 [2024-11-09 16:26:50.219895] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:30.541 [2024-11-09 16:26:50.219902] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:30.541 [2024-11-09 16:26:50.219908] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:30.541 [2024-11-09 16:26:50.219914] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:30.541 [2024-11-09 16:26:50.219921] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:30.541 [2024-11-09 16:26:50.219927] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:30.541 [2024-11-09 16:26:50.219934] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:30.541 [2024-11-09 16:26:50.219941] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:30.541 [2024-11-09 16:26:50.219948] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:30.541 [2024-11-09 16:26:50.219954] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:30.541 [2024-11-09 16:26:50.219961] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:30.541 [2024-11-09 16:26:50.219967] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:30.541 [2024-11-09 16:26:50.219974] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:30.541 [2024-11-09 16:26:50.219981] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:30.541 [2024-11-09 16:26:50.219988] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:30.541 [2024-11-09 16:26:50.219994] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:30.541 [2024-11-09 16:26:50.220002] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:30.541 [2024-11-09 16:26:50.220009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:30.541 [2024-11-09 16:26:50.220019] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:30.541 [2024-11-09 16:26:50.220028] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:30.541 [2024-11-09 16:26:50.220036] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:30.541 [2024-11-09 16:26:50.220043] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:30.541 [2024-11-09 16:26:50.220050] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:30.541 [2024-11-09 16:26:50.220057] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:30.541 [2024-11-09 16:26:50.220063] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:30.541 [2024-11-09 16:26:50.220071] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:30.541 [2024-11-09 16:26:50.220080] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:30.541 [2024-11-09 16:26:50.220089] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:30.541 [2024-11-09 16:26:50.220096] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:30.541 [2024-11-09 16:26:50.220103] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:30.541 [2024-11-09 16:26:50.220110] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:30.541 [2024-11-09 16:26:50.220117] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:30.541 [2024-11-09 16:26:50.220124] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:30.541 [2024-11-09 16:26:50.220132] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:30.541 [2024-11-09 16:26:50.220139] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:30.541 [2024-11-09 16:26:50.220146] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:30.541 [2024-11-09 16:26:50.220153] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:30.541 [2024-11-09 16:26:50.220160] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:30.541 [2024-11-09 16:26:50.220168] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:30.541 [2024-11-09 16:26:50.220176] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:30.541 [2024-11-09 16:26:50.220184] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:30.541 [2024-11-09 16:26:50.220198] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:30.541 [2024-11-09 16:26:50.220206] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:30.541 [2024-11-09 16:26:50.220214] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:30.541 [2024-11-09 16:26:50.220235] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:30.541 [2024-11-09 16:26:50.220243] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:30.541 [2024-11-09 16:26:50.220252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.541 [2024-11-09 16:26:50.220260] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:30.541 [2024-11-09 16:26:50.220269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:17:30.541 [2024-11-09 16:26:50.220276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.541 [2024-11-09 16:26:50.238404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.541 [2024-11-09 16:26:50.238446] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:30.541 [2024-11-09 16:26:50.238458] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.084 ms 00:17:30.541 [2024-11-09 16:26:50.238468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.541 [2024-11-09 16:26:50.238599] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.541 [2024-11-09 16:26:50.238610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:30.541 [2024-11-09 16:26:50.238618] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:30.541 [2024-11-09 16:26:50.238627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.541 [2024-11-09 16:26:50.288954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.541 [2024-11-09 16:26:50.289007] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:30.541 [2024-11-09 16:26:50.289020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.305 ms 00:17:30.541 [2024-11-09 16:26:50.289029] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.541 [2024-11-09 16:26:50.289114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.541 [2024-11-09 16:26:50.289126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:30.541 [2024-11-09 16:26:50.289139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:30.541 [2024-11-09 16:26:50.289172] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.541 [2024-11-09 16:26:50.289751] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.541 [2024-11-09 16:26:50.289793] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:30.541 [2024-11-09 16:26:50.289804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:17:30.541 [2024-11-09 16:26:50.289812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.541 [2024-11-09 16:26:50.289949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.541 [2024-11-09 16:26:50.289959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:30.541 [2024-11-09 16:26:50.289967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:17:30.541 [2024-11-09 16:26:50.289975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.541 [2024-11-09 16:26:50.307146] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.541 [2024-11-09 16:26:50.307189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:30.541 [2024-11-09 16:26:50.307201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.145 ms 00:17:30.541 
[2024-11-09 16:26:50.307213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.803 [2024-11-09 16:26:50.321415] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:30.803 [2024-11-09 16:26:50.321461] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:30.803 [2024-11-09 16:26:50.321473] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.803 [2024-11-09 16:26:50.321481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:30.803 [2024-11-09 16:26:50.321491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.118 ms 00:17:30.803 [2024-11-09 16:26:50.321499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.803 [2024-11-09 16:26:50.347423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.803 [2024-11-09 16:26:50.347476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:30.803 [2024-11-09 16:26:50.347488] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.833 ms 00:17:30.803 [2024-11-09 16:26:50.347496] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.803 [2024-11-09 16:26:50.360788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.803 [2024-11-09 16:26:50.360970] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:30.803 [2024-11-09 16:26:50.361002] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.204 ms 00:17:30.803 [2024-11-09 16:26:50.361009] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.803 [2024-11-09 16:26:50.373730] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.803 [2024-11-09 16:26:50.373773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:30.803 [2024-11-09 16:26:50.373785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.642 ms 00:17:30.803 [2024-11-09 16:26:50.373792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.803 [2024-11-09 16:26:50.374195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.803 [2024-11-09 16:26:50.374208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:30.803 [2024-11-09 16:26:50.374219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:17:30.803 [2024-11-09 16:26:50.374252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.803 [2024-11-09 16:26:50.440657] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.803 [2024-11-09 16:26:50.440715] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:30.803 [2024-11-09 16:26:50.440730] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.379 ms 00:17:30.803 [2024-11-09 16:26:50.440746] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.803 [2024-11-09 16:26:50.452382] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:30.803 [2024-11-09 16:26:50.471013] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.803 [2024-11-09 16:26:50.471236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:30.803 [2024-11-09 16:26:50.471259] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.160 ms 00:17:30.803 [2024-11-09 16:26:50.471268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.803 [2024-11-09 16:26:50.471358] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.803 [2024-11-09 16:26:50.471369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:30.803 [2024-11-09 16:26:50.471383] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:30.803 [2024-11-09 16:26:50.471392] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.803 [2024-11-09 16:26:50.471452] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.803 [2024-11-09 16:26:50.471462] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:30.803 [2024-11-09 16:26:50.471470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:30.803 [2024-11-09 16:26:50.471478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.803 [2024-11-09 16:26:50.472874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.803 [2024-11-09 16:26:50.472917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:30.803 [2024-11-09 16:26:50.472928] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.375 ms 00:17:30.803 [2024-11-09 16:26:50.472936] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.803 [2024-11-09 16:26:50.472973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.803 [2024-11-09 16:26:50.472986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:30.803 [2024-11-09 16:26:50.472995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:30.803 [2024-11-09 16:26:50.473003] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.803 [2024-11-09 16:26:50.473042] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:30.803 [2024-11-09 16:26:50.473052] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.803 [2024-11-09 16:26:50.473061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:30.803 [2024-11-09 16:26:50.473069] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:30.803 [2024-11-09 16:26:50.473077] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.803 [2024-11-09 16:26:50.499724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.803 [2024-11-09 16:26:50.499778] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:30.803 [2024-11-09 16:26:50.499793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.620 ms 00:17:30.803 [2024-11-09 16:26:50.499802] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.803 [2024-11-09 16:26:50.499912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:30.803 [2024-11-09 16:26:50.499925] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:30.803 [2024-11-09 16:26:50.499934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:30.803 [2024-11-09 16:26:50.499942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:30.803 [2024-11-09 16:26:50.501023] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:30.803 [2024-11-09 16:26:50.504593] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 315.113 ms, result 0 00:17:30.803 [2024-11-09 16:26:50.505947] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:30.803 [2024-11-09 16:26:50.520106] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:31.417  [2024-11-09T16:26:51.187Z] Copying: 4096/4096 [kB] (average 11 MBps)[2024-11-09 16:26:50.886820] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:31.417 [2024-11-09 16:26:50.896004] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.417 [2024-11-09 16:26:50.896061] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:31.418 [2024-11-09 16:26:50.896074] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:31.418 [2024-11-09 16:26:50.896082] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.418 [2024-11-09 16:26:50.896107] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:31.418 [2024-11-09 16:26:50.899136] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.418 [2024-11-09 16:26:50.899175] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:31.418 [2024-11-09 16:26:50.899187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.016 ms 00:17:31.418 [2024-11-09 16:26:50.899195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.418 [2024-11-09 16:26:50.902269] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.418 [2024-11-09 16:26:50.902315] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:31.418 [2024-11-09 16:26:50.902326] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.046 ms 00:17:31.418 [2024-11-09 16:26:50.902341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.418 [2024-11-09 16:26:50.906807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.418 [2024-11-09 16:26:50.906843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:31.418 [2024-11-09 16:26:50.906854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.447 ms 00:17:31.418 [2024-11-09 16:26:50.906862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.418 [2024-11-09 16:26:50.913735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.418 [2024-11-09 16:26:50.913923] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:31.418 [2024-11-09 16:26:50.913944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.839 ms 00:17:31.418 [2024-11-09 16:26:50.913959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.418 [2024-11-09 16:26:50.939524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.418 [2024-11-09 16:26:50.939570] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:31.418 [2024-11-09 16:26:50.939582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.505 ms 00:17:31.418 [2024-11-09 
16:26:50.939589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.418 [2024-11-09 16:26:50.956981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.418 [2024-11-09 16:26:50.957189] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:31.418 [2024-11-09 16:26:50.957214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.328 ms 00:17:31.418 [2024-11-09 16:26:50.957243] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.418 [2024-11-09 16:26:50.957409] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.418 [2024-11-09 16:26:50.957422] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:31.418 [2024-11-09 16:26:50.957432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:17:31.418 [2024-11-09 16:26:50.957441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.418 [2024-11-09 16:26:50.984012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.418 [2024-11-09 16:26:50.984202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:31.418 [2024-11-09 16:26:50.984240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.553 ms 00:17:31.418 [2024-11-09 16:26:50.984248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.418 [2024-11-09 16:26:51.009848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.418 [2024-11-09 16:26:51.009894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:31.418 [2024-11-09 16:26:51.009906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.528 ms 00:17:31.418 [2024-11-09 16:26:51.009913] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.418 [2024-11-09 16:26:51.034493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.418 [2024-11-09 16:26:51.034536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:31.418 [2024-11-09 16:26:51.034547] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.519 ms 00:17:31.418 [2024-11-09 16:26:51.034554] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.418 [2024-11-09 16:26:51.059351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.418 [2024-11-09 16:26:51.059395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:31.418 [2024-11-09 16:26:51.059406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.708 ms 00:17:31.418 [2024-11-09 16:26:51.059413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.418 [2024-11-09 16:26:51.059474] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:31.418 [2024-11-09 16:26:51.059491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059527] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 
16:26:51.059728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:31.418 [2024-11-09 16:26:51.059782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 
00:17:31.419 [2024-11-09 16:26:51.059917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.059998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 
wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:31.419 [2024-11-09 16:26:51.060318] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:31.419 [2024-11-09 16:26:51.060326] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0532df08-a2cd-46ce-aed4-05877063f95a 00:17:31.419 [2024-11-09 16:26:51.060334] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:31.419 [2024-11-09 16:26:51.060342] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:31.419 [2024-11-09 
16:26:51.060350] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:31.419 [2024-11-09 16:26:51.060360] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:31.419 [2024-11-09 16:26:51.060370] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:31.419 [2024-11-09 16:26:51.060379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:31.419 [2024-11-09 16:26:51.060386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:31.419 [2024-11-09 16:26:51.060392] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:31.419 [2024-11-09 16:26:51.060398] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:31.419 [2024-11-09 16:26:51.060406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.419 [2024-11-09 16:26:51.060415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:31.419 [2024-11-09 16:26:51.060423] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.933 ms 00:17:31.419 [2024-11-09 16:26:51.060431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.419 [2024-11-09 16:26:51.073918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.419 [2024-11-09 16:26:51.074091] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:31.419 [2024-11-09 16:26:51.074114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.454 ms 00:17:31.419 [2024-11-09 16:26:51.074122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.420 [2024-11-09 16:26:51.074390] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.420 [2024-11-09 16:26:51.074403] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:31.420 [2024-11-09 16:26:51.074412] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:17:31.420 [2024-11-09 16:26:51.074419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.420 [2024-11-09 16:26:51.115331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.420 [2024-11-09 16:26:51.115377] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:31.420 [2024-11-09 16:26:51.115393] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.420 [2024-11-09 16:26:51.115402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.420 [2024-11-09 16:26:51.115493] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.420 [2024-11-09 16:26:51.115503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:31.420 [2024-11-09 16:26:51.115511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.420 [2024-11-09 16:26:51.115519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.420 [2024-11-09 16:26:51.115574] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.420 [2024-11-09 16:26:51.115586] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:31.420 [2024-11-09 16:26:51.115594] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.420 [2024-11-09 16:26:51.115607] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.420 [2024-11-09 16:26:51.115625] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:17:31.420 [2024-11-09 16:26:51.115633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:31.420 [2024-11-09 16:26:51.115641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.420 [2024-11-09 16:26:51.115651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.682 [2024-11-09 16:26:51.194532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.682 [2024-11-09 16:26:51.194588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:31.682 [2024-11-09 16:26:51.194606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.682 [2024-11-09 16:26:51.194615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.682 [2024-11-09 16:26:51.226282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.682 [2024-11-09 16:26:51.226328] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:31.682 [2024-11-09 16:26:51.226340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.682 [2024-11-09 16:26:51.226348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.682 [2024-11-09 16:26:51.226411] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.682 [2024-11-09 16:26:51.226421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:31.682 [2024-11-09 16:26:51.226430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.682 [2024-11-09 16:26:51.226438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.682 [2024-11-09 16:26:51.226479] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.682 [2024-11-09 16:26:51.226488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:31.682 [2024-11-09 16:26:51.226496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.682 [2024-11-09 16:26:51.226505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.682 [2024-11-09 16:26:51.226611] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.682 [2024-11-09 16:26:51.226623] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:31.682 [2024-11-09 16:26:51.226631] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.682 [2024-11-09 16:26:51.226639] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.682 [2024-11-09 16:26:51.226676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.682 [2024-11-09 16:26:51.226687] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:31.682 [2024-11-09 16:26:51.226695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.682 [2024-11-09 16:26:51.226703] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.682 [2024-11-09 16:26:51.226746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.682 [2024-11-09 16:26:51.226758] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:31.682 [2024-11-09 16:26:51.226767] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.682 [2024-11-09 16:26:51.226775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.682 
[2024-11-09 16:26:51.226827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:31.682 [2024-11-09 16:26:51.226842] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:31.682 [2024-11-09 16:26:51.226851] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:31.682 [2024-11-09 16:26:51.226859] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.682 [2024-11-09 16:26:51.227013] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.011 ms, result 0 00:17:32.626 00:17:32.626 00:17:32.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:32.626 16:26:52 -- ftl/trim.sh@93 -- # svcpid=72772 00:17:32.626 16:26:52 -- ftl/trim.sh@94 -- # waitforlisten 72772 00:17:32.626 16:26:52 -- common/autotest_common.sh@829 -- # '[' -z 72772 ']' 00:17:32.626 16:26:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:32.626 16:26:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:32.626 16:26:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:32.626 16:26:52 -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:32.626 16:26:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:32.626 16:26:52 -- common/autotest_common.sh@10 -- # set +x 00:17:32.626 [2024-11-09 16:26:52.224007] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:32.626 [2024-11-09 16:26:52.225013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72772 ] 00:17:32.626 [2024-11-09 16:26:52.385398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.887 [2024-11-09 16:26:52.608256] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:32.887 [2024-11-09 16:26:52.608472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.273 16:26:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:34.273 16:26:53 -- common/autotest_common.sh@862 -- # return 0 00:17:34.273 16:26:53 -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:34.273 [2024-11-09 16:26:53.941161] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:34.273 [2024-11-09 16:26:53.941254] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:34.532 [2024-11-09 16:26:54.106331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.532 [2024-11-09 16:26:54.106476] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:34.532 [2024-11-09 16:26:54.106494] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:34.532 [2024-11-09 16:26:54.106501] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.532 [2024-11-09 16:26:54.108712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.532 [2024-11-09 16:26:54.108746] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:34.532 [2024-11-09 16:26:54.108757] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
2.193 ms 00:17:34.532 [2024-11-09 16:26:54.108763] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.532 [2024-11-09 16:26:54.108837] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:34.532 [2024-11-09 16:26:54.109468] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:34.532 [2024-11-09 16:26:54.109494] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.532 [2024-11-09 16:26:54.109501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:34.533 [2024-11-09 16:26:54.109509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:17:34.533 [2024-11-09 16:26:54.109514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.533 [2024-11-09 16:26:54.110511] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:34.533 [2024-11-09 16:26:54.120203] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.533 [2024-11-09 16:26:54.120240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:34.533 [2024-11-09 16:26:54.120249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.696 ms 00:17:34.533 [2024-11-09 16:26:54.120256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.533 [2024-11-09 16:26:54.120317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.533 [2024-11-09 16:26:54.120327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:34.533 [2024-11-09 16:26:54.120333] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:34.533 [2024-11-09 16:26:54.120341] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.533 [2024-11-09 16:26:54.124525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.533 [2024-11-09 16:26:54.124552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:34.533 [2024-11-09 16:26:54.124558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.146 ms 00:17:34.533 [2024-11-09 16:26:54.124566] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.533 [2024-11-09 16:26:54.124633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.533 [2024-11-09 16:26:54.124642] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:34.533 [2024-11-09 16:26:54.124648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:34.533 [2024-11-09 16:26:54.124654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.533 [2024-11-09 16:26:54.124674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.533 [2024-11-09 16:26:54.124682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:34.533 [2024-11-09 16:26:54.124688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:34.533 [2024-11-09 16:26:54.124695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.533 [2024-11-09 16:26:54.124716] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:34.533 [2024-11-09 16:26:54.127460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.533 [2024-11-09 16:26:54.127573] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:34.533 [2024-11-09 16:26:54.127589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.750 ms 00:17:34.533 [2024-11-09 16:26:54.127595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.533 [2024-11-09 16:26:54.127627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.533 [2024-11-09 16:26:54.127633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:34.533 [2024-11-09 16:26:54.127640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:34.533 [2024-11-09 16:26:54.127647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.533 [2024-11-09 16:26:54.127665] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:34.533 [2024-11-09 16:26:54.127678] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:34.533 [2024-11-09 16:26:54.127704] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:34.533 [2024-11-09 16:26:54.127715] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:34.533 [2024-11-09 16:26:54.127772] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:34.533 [2024-11-09 16:26:54.127780] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:34.533 [2024-11-09 16:26:54.127792] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:34.533 [2024-11-09 16:26:54.127800] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:34.533 [2024-11-09 16:26:54.127807] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:34.533 [2024-11-09 16:26:54.127813] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:34.533 [2024-11-09 16:26:54.127820] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:34.533 [2024-11-09 16:26:54.127826] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:34.533 [2024-11-09 16:26:54.127834] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:34.533 [2024-11-09 16:26:54.127840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.533 [2024-11-09 16:26:54.127846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:34.533 [2024-11-09 16:26:54.127852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.179 ms 00:17:34.533 [2024-11-09 16:26:54.127858] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.533 [2024-11-09 16:26:54.127908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.533 [2024-11-09 16:26:54.127915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:34.533 [2024-11-09 16:26:54.127921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:34.533 [2024-11-09 16:26:54.127927] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.533 [2024-11-09 16:26:54.127984] ftl_layout.c: 759:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] NV cache layout: 00:17:34.533 [2024-11-09 16:26:54.127992] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:34.533 [2024-11-09 16:26:54.127998] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:34.533 [2024-11-09 16:26:54.128004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.533 [2024-11-09 16:26:54.128010] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:34.533 [2024-11-09 16:26:54.128016] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:34.533 [2024-11-09 16:26:54.128021] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:34.533 [2024-11-09 16:26:54.128029] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:34.533 [2024-11-09 16:26:54.128035] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:34.533 [2024-11-09 16:26:54.128041] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:34.533 [2024-11-09 16:26:54.128046] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:34.533 [2024-11-09 16:26:54.128052] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:34.533 [2024-11-09 16:26:54.128057] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:34.533 [2024-11-09 16:26:54.128063] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:34.533 [2024-11-09 16:26:54.128068] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:34.533 [2024-11-09 16:26:54.128075] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.533 [2024-11-09 16:26:54.128080] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:34.533 [2024-11-09 16:26:54.128086] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:34.533 [2024-11-09 16:26:54.128090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.533 [2024-11-09 16:26:54.128096] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:34.533 [2024-11-09 16:26:54.128102] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:34.533 [2024-11-09 16:26:54.128108] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:34.533 [2024-11-09 16:26:54.128113] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:34.533 [2024-11-09 16:26:54.128120] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:34.533 [2024-11-09 16:26:54.128125] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:34.533 [2024-11-09 16:26:54.128136] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:34.533 [2024-11-09 16:26:54.128141] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:34.533 [2024-11-09 16:26:54.128147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:34.533 [2024-11-09 16:26:54.128151] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:34.533 [2024-11-09 16:26:54.128157] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:34.533 [2024-11-09 16:26:54.128162] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:34.533 [2024-11-09 16:26:54.128169] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:34.533 [2024-11-09 16:26:54.128174] ftl_layout.c: 
116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:34.533 [2024-11-09 16:26:54.128179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:34.533 [2024-11-09 16:26:54.128184] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:34.533 [2024-11-09 16:26:54.128190] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:34.533 [2024-11-09 16:26:54.128195] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:34.533 [2024-11-09 16:26:54.128201] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:34.533 [2024-11-09 16:26:54.128206] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:34.533 [2024-11-09 16:26:54.128213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:34.533 [2024-11-09 16:26:54.128217] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:34.533 [2024-11-09 16:26:54.128236] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:34.533 [2024-11-09 16:26:54.128241] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:34.533 [2024-11-09 16:26:54.128248] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:34.533 [2024-11-09 16:26:54.128254] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:34.533 [2024-11-09 16:26:54.128260] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:34.533 [2024-11-09 16:26:54.128265] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:34.533 [2024-11-09 16:26:54.128273] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:34.533 [2024-11-09 16:26:54.128278] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:34.533 [2024-11-09 16:26:54.128285] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:34.533 [2024-11-09 16:26:54.128290] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:34.534 [2024-11-09 16:26:54.128298] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:34.534 [2024-11-09 16:26:54.128305] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:34.534 [2024-11-09 16:26:54.128311] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:34.534 [2024-11-09 16:26:54.128317] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:34.534 [2024-11-09 16:26:54.128325] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:34.534 [2024-11-09 16:26:54.128331] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:34.534 [2024-11-09 16:26:54.128338] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:34.534 [2024-11-09 16:26:54.128343] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:34.534 [2024-11-09 
16:26:54.128349] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:34.534 [2024-11-09 16:26:54.128354] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:34.534 [2024-11-09 16:26:54.128361] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:34.534 [2024-11-09 16:26:54.128366] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:34.534 [2024-11-09 16:26:54.128372] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:34.534 [2024-11-09 16:26:54.128378] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:34.534 [2024-11-09 16:26:54.128384] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:34.534 [2024-11-09 16:26:54.128390] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:34.534 [2024-11-09 16:26:54.128397] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:34.534 [2024-11-09 16:26:54.128403] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:34.534 [2024-11-09 16:26:54.128410] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:34.534 [2024-11-09 16:26:54.128415] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:34.534 [2024-11-09 16:26:54.128423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.534 [2024-11-09 16:26:54.128429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:34.534 [2024-11-09 16:26:54.128435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:17:34.534 [2024-11-09 16:26:54.128441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.534 [2024-11-09 16:26:54.140182] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.534 [2024-11-09 16:26:54.140208] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:34.534 [2024-11-09 16:26:54.140219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.704 ms 00:17:34.534 [2024-11-09 16:26:54.140251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.534 [2024-11-09 16:26:54.140341] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.534 [2024-11-09 16:26:54.140348] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:34.534 [2024-11-09 16:26:54.140356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:34.534 [2024-11-09 16:26:54.140361] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.534 [2024-11-09 16:26:54.164267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.534 [2024-11-09 
16:26:54.164293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:34.534 [2024-11-09 16:26:54.164303] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.889 ms 00:17:34.534 [2024-11-09 16:26:54.164309] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.534 [2024-11-09 16:26:54.164353] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.534 [2024-11-09 16:26:54.164363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:34.534 [2024-11-09 16:26:54.164371] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:34.534 [2024-11-09 16:26:54.164377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.534 [2024-11-09 16:26:54.164655] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.534 [2024-11-09 16:26:54.164666] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:34.534 [2024-11-09 16:26:54.164676] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:17:34.534 [2024-11-09 16:26:54.164681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.534 [2024-11-09 16:26:54.164770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.534 [2024-11-09 16:26:54.164777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:34.534 [2024-11-09 16:26:54.164785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:34.534 [2024-11-09 16:26:54.164792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.534 [2024-11-09 16:26:54.176539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.534 [2024-11-09 16:26:54.176562] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:34.534 [2024-11-09 16:26:54.176572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.730 ms 00:17:34.534 [2024-11-09 16:26:54.176578] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.534 [2024-11-09 16:26:54.186452] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:34.534 [2024-11-09 16:26:54.186555] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:34.534 [2024-11-09 16:26:54.186569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.534 [2024-11-09 16:26:54.186575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:34.534 [2024-11-09 16:26:54.186583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.913 ms 00:17:34.534 [2024-11-09 16:26:54.186588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.534 [2024-11-09 16:26:54.206017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.534 [2024-11-09 16:26:54.206132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:34.534 [2024-11-09 16:26:54.206147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.378 ms 00:17:34.534 [2024-11-09 16:26:54.206153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.534 [2024-11-09 16:26:54.214951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.534 [2024-11-09 16:26:54.214978] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: 
Restore band info metadata 00:17:34.534 [2024-11-09 16:26:54.214987] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.747 ms 00:17:34.534 [2024-11-09 16:26:54.214992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.534 [2024-11-09 16:26:54.223772] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.534 [2024-11-09 16:26:54.223796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:34.534 [2024-11-09 16:26:54.223806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.739 ms 00:17:34.534 [2024-11-09 16:26:54.223811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.534 [2024-11-09 16:26:54.224077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.534 [2024-11-09 16:26:54.224088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:34.534 [2024-11-09 16:26:54.224098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:17:34.534 [2024-11-09 16:26:54.224103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.534 [2024-11-09 16:26:54.268958] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.534 [2024-11-09 16:26:54.269074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:34.534 [2024-11-09 16:26:54.269094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.837 ms 00:17:34.534 [2024-11-09 16:26:54.269101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.534 [2024-11-09 16:26:54.277077] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:34.534 [2024-11-09 16:26:54.288345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.534 [2024-11-09 16:26:54.288380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:34.534 [2024-11-09 16:26:54.288390] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.188 ms 00:17:34.534 [2024-11-09 16:26:54.288397] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.534 [2024-11-09 16:26:54.288447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.534 [2024-11-09 16:26:54.288458] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:34.534 [2024-11-09 16:26:54.288466] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:34.534 [2024-11-09 16:26:54.288472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.534 [2024-11-09 16:26:54.288508] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.534 [2024-11-09 16:26:54.288516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:34.534 [2024-11-09 16:26:54.288523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:34.534 [2024-11-09 16:26:54.288529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.534 [2024-11-09 16:26:54.289445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.534 [2024-11-09 16:26:54.289470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:34.534 [2024-11-09 16:26:54.289477] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:17:34.534 [2024-11-09 16:26:54.289485] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:34.534 [2024-11-09 16:26:54.289509] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.534 [2024-11-09 16:26:54.289516] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:17:34.534 [2024-11-09 16:26:54.289522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:17:34.534 [2024-11-09 16:26:54.289529] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.534 [2024-11-09 16:26:54.289556] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:17:34.534 [2024-11-09 16:26:54.289565] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.534 [2024-11-09 16:26:54.289571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:17:34.534 [2024-11-09 16:26:54.289578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:17:34.534 [2024-11-09 16:26:54.289583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.793 [2024-11-09 16:26:54.308337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.793 [2024-11-09 16:26:54.308364] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:17:34.793 [2024-11-09 16:26:54.308375] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.733 ms
00:17:34.793 [2024-11-09 16:26:54.308381] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.793 [2024-11-09 16:26:54.308446] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.793 [2024-11-09 16:26:54.308455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:17:34.793 [2024-11-09 16:26:54.308463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms
00:17:34.793 [2024-11-09 16:26:54.308470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.793 [2024-11-09 16:26:54.309051] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:34.793 [2024-11-09 16:26:54.311479] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 202.510 ms, result 0
00:17:34.793 [2024-11-09 16:26:54.313086] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:34.793 Some configs were skipped because the RPC state that can call them passed over.
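With 'FTL startup' finished (202.510 ms end to end), trim.sh exercises the trim path over JSON-RPC. The two bdev_ftl_unmap calls traced next take the following shape; a minimal sketch, assuming the target is listening on rpc.py's default /var/tmp/spdk.sock socket, with the rpc.py path abbreviated into a shell variable for readability (the bdev name, LBAs, and block counts are the ones used by this run):

  # Trim two 1024-block ranges on the ftl0 bdev: one at LBA 0,
  # one at LBA 23591936; each call prints 'true' on success.
  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  $RPC bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

On the target side each call is traced as a 'Process unmap' management step of roughly 19 ms, as the log below shows.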
00:17:34.793 16:26:54 -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:34.793 [2024-11-09 16:26:54.542941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.793 [2024-11-09 16:26:54.543048] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:34.793 [2024-11-09 16:26:54.543091] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.046 ms 00:17:34.793 [2024-11-09 16:26:54.543113] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.793 [2024-11-09 16:26:54.543153] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 19.256 ms, result 0 00:17:34.793 true 00:17:34.793 16:26:54 -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:35.052 [2024-11-09 16:26:54.742142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.052 [2024-11-09 16:26:54.742258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Process unmap 00:17:35.052 [2024-11-09 16:26:54.742302] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.919 ms 00:17:35.052 [2024-11-09 16:26:54.742320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.052 [2024-11-09 16:26:54.742361] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL unmap', duration = 19.137 ms, result 0 00:17:35.052 true 00:17:35.052 16:26:54 -- ftl/trim.sh@102 -- # killprocess 72772 00:17:35.052 16:26:54 -- common/autotest_common.sh@936 -- # '[' -z 72772 ']' 00:17:35.052 16:26:54 -- common/autotest_common.sh@940 -- # kill -0 72772 00:17:35.052 16:26:54 -- common/autotest_common.sh@941 -- # uname 00:17:35.052 16:26:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:35.052 16:26:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72772 00:17:35.052 killing process with pid 72772 00:17:35.052 16:26:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:35.052 16:26:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:35.052 16:26:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72772' 00:17:35.052 16:26:54 -- common/autotest_common.sh@955 -- # kill 72772 00:17:35.052 16:26:54 -- common/autotest_common.sh@960 -- # wait 72772 00:17:35.620 [2024-11-09 16:26:55.310499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.620 [2024-11-09 16:26:55.310539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:35.620 [2024-11-09 16:26:55.310549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:35.620 [2024-11-09 16:26:55.310558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.620 [2024-11-09 16:26:55.310576] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:35.620 [2024-11-09 16:26:55.312671] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.620 [2024-11-09 16:26:55.312693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:35.620 [2024-11-09 16:26:55.312704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.083 ms 00:17:35.620 [2024-11-09 16:26:55.312710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.620 [2024-11-09 
16:26:55.312919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.620 [2024-11-09 16:26:55.312927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:35.620 [2024-11-09 16:26:55.312934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:17:35.620 [2024-11-09 16:26:55.312940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.620 [2024-11-09 16:26:55.315969] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.620 [2024-11-09 16:26:55.315994] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:35.620 [2024-11-09 16:26:55.316003] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.013 ms 00:17:35.620 [2024-11-09 16:26:55.316008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.620 [2024-11-09 16:26:55.321324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.620 [2024-11-09 16:26:55.321433] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:35.620 [2024-11-09 16:26:55.321450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.289 ms 00:17:35.620 [2024-11-09 16:26:55.321456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.620 [2024-11-09 16:26:55.328805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.620 [2024-11-09 16:26:55.328829] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:35.620 [2024-11-09 16:26:55.328839] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.293 ms 00:17:35.620 [2024-11-09 16:26:55.328845] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.620 [2024-11-09 16:26:55.335430] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.620 [2024-11-09 16:26:55.335521] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:35.620 [2024-11-09 16:26:55.335534] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.556 ms 00:17:35.620 [2024-11-09 16:26:55.335540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.620 [2024-11-09 16:26:55.335641] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.620 [2024-11-09 16:26:55.335649] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:35.620 [2024-11-09 16:26:55.335656] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:35.620 [2024-11-09 16:26:55.335661] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.620 [2024-11-09 16:26:55.343701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.620 [2024-11-09 16:26:55.343724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:35.620 [2024-11-09 16:26:55.343732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.023 ms 00:17:35.620 [2024-11-09 16:26:55.343738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.620 [2024-11-09 16:26:55.351102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.620 [2024-11-09 16:26:55.351187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:35.620 [2024-11-09 16:26:55.351205] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.334 ms 00:17:35.620 [2024-11-09 16:26:55.351210] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:35.620 [2024-11-09 16:26:55.358719] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.620 [2024-11-09 16:26:55.358742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:35.621 [2024-11-09 16:26:55.358751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.470 ms 00:17:35.621 [2024-11-09 16:26:55.358756] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.621 [2024-11-09 16:26:55.365796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:35.621 [2024-11-09 16:26:55.365882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:35.621 [2024-11-09 16:26:55.365895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.991 ms 00:17:35.621 [2024-11-09 16:26:55.365900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:35.621 [2024-11-09 16:26:55.365931] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:35.621 [2024-11-09 16:26:55.365944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.365952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.365959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.365966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.365972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.365980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.365986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.365993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.365998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366058] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366216] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 
16:26:55.366391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:35.621 [2024-11-09 16:26:55.366491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:35.622 [2024-11-09 16:26:55.366497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:35.622 [2024-11-09 16:26:55.366506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:35.622 [2024-11-09 16:26:55.366512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:35.622 [2024-11-09 16:26:55.366518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:35.622 [2024-11-09 16:26:55.366524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:35.622 [2024-11-09 16:26:55.366531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:35.622 [2024-11-09 16:26:55.366536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:35.622 [2024-11-09 16:26:55.366543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 
00:17:35.622 [2024-11-09 16:26:55.366548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:17:35.622 [2024-11-09 16:26:55.366555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:17:35.622 [2024-11-09 16:26:55.366561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:17:35.622 [2024-11-09 16:26:55.366568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:17:35.622 [2024-11-09 16:26:55.366577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:17:35.622 [2024-11-09 16:26:55.366585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:17:35.622 [2024-11-09 16:26:55.366590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:17:35.622 [2024-11-09 16:26:55.366597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:17:35.622 [2024-11-09 16:26:55.366609] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:35.622 [2024-11-09 16:26:55.366617] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0532df08-a2cd-46ce-aed4-05877063f95a
00:17:35.622 [2024-11-09 16:26:55.366623] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:35.622 [2024-11-09 16:26:55.366630] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:35.622 [2024-11-09 16:26:55.366635] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:35.622 [2024-11-09 16:26:55.366642] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:35.622 [2024-11-09 16:26:55.366647] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:35.622 [2024-11-09 16:26:55.366654] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:35.622 [2024-11-09 16:26:55.366660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:35.622 [2024-11-09 16:26:55.366665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:35.622 [2024-11-09 16:26:55.366670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:35.622 [2024-11-09 16:26:55.366676] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:35.622 [2024-11-09 16:26:55.366682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:35.622 [2024-11-09 16:26:55.366691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms
00:17:35.622 [2024-11-09 16:26:55.366696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.622 [2024-11-09 16:26:55.376420] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:35.622 [2024-11-09 16:26:55.376507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:35.622 [2024-11-09 16:26:55.376521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.707 ms
00:17:35.622 [2024-11-09 16:26:55.376527] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.622 [2024-11-09 16:26:55.376688] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:35.622 [2024-11-09 16:26:55.376701] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:35.622 [2024-11-09 16:26:55.376709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms
00:17:35.622 [2024-11-09 16:26:55.376715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.881 [2024-11-09 16:26:55.411588] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.881 [2024-11-09 16:26:55.411672] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:35.881 [2024-11-09 16:26:55.411685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.881 [2024-11-09 16:26:55.411692] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.881 [2024-11-09 16:26:55.411753] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.881 [2024-11-09 16:26:55.411761] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:35.881 [2024-11-09 16:26:55.411769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.881 [2024-11-09 16:26:55.411774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.881 [2024-11-09 16:26:55.411806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.881 [2024-11-09 16:26:55.411813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:35.881 [2024-11-09 16:26:55.411821] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.881 [2024-11-09 16:26:55.411827] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.881 [2024-11-09 16:26:55.411841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.881 [2024-11-09 16:26:55.411846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:35.881 [2024-11-09 16:26:55.411855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.881 [2024-11-09 16:26:55.411861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.881 [2024-11-09 16:26:55.472217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.881 [2024-11-09 16:26:55.472259] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:35.881 [2024-11-09 16:26:55.472269] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.881 [2024-11-09 16:26:55.472276] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.881 [2024-11-09 16:26:55.494816] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.881 [2024-11-09 16:26:55.494930] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:35.881 [2024-11-09 16:26:55.494944] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.881 [2024-11-09 16:26:55.494950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.881 [2024-11-09 16:26:55.494992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.881 [2024-11-09 16:26:55.494999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:35.881 [2024-11-09 16:26:55.495008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.881 [2024-11-09 16:26:55.495013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.881 [2024-11-09 16:26:55.495036] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.881 [2024-11-09 16:26:55.495043] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:35.881 [2024-11-09 16:26:55.495049] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.881 [2024-11-09 16:26:55.495056] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.881 [2024-11-09 16:26:55.495125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.881 [2024-11-09 16:26:55.495132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:35.881 [2024-11-09 16:26:55.495140] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.881 [2024-11-09 16:26:55.495145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.881 [2024-11-09 16:26:55.495170] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.881 [2024-11-09 16:26:55.495177] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:35.881 [2024-11-09 16:26:55.495184] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.881 [2024-11-09 16:26:55.495189] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.881 [2024-11-09 16:26:55.495219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.881 [2024-11-09 16:26:55.495244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:35.881 [2024-11-09 16:26:55.495254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.881 [2024-11-09 16:26:55.495260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.881 [2024-11-09 16:26:55.495294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:35.881 [2024-11-09 16:26:55.495301] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:35.881 [2024-11-09 16:26:55.495308] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:35.881 [2024-11-09 16:26:55.495315] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:35.881 [2024-11-09 16:26:55.495418] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 184.903 ms, result 0
00:17:36.450 16:26:56 -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:17:36.450 [2024-11-09 16:26:56.179203] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
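The spdk_dd step launched above (trim.sh@105) re-opens ftl0 in a standalone SPDK application and copies its contents out to a flat file so the test can inspect the trimmed ranges afterwards. A sketch of the invocation, using only the flags visible in this run and assuming their usual dd-style meaning (--ib the input bdev, --of the output file, --count the number of blocks to copy, --json the app config that recreates the ftl0 stack):

  # Dump 65536 blocks of the ftl0 bdev into test/ftl/data for verification
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
      --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
      --count=65536 \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

Because spdk_dd is its own application, the log continues with a fresh single-core DPDK bring-up and another 'FTL startup' sequence for ftl0.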
00:17:36.450 [2024-11-09 16:26:56.179337] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72828 ] 00:17:36.709 [2024-11-09 16:26:56.327521] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.709 [2024-11-09 16:26:56.477834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.967 [2024-11-09 16:26:56.680763] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:36.967 [2024-11-09 16:26:56.680812] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:37.227 [2024-11-09 16:26:56.823941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.227 [2024-11-09 16:26:56.824072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:37.227 [2024-11-09 16:26:56.824089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:37.227 [2024-11-09 16:26:56.824096] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.227 [2024-11-09 16:26:56.826131] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.227 [2024-11-09 16:26:56.826163] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:37.227 [2024-11-09 16:26:56.826170] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.018 ms 00:17:37.227 [2024-11-09 16:26:56.826176] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.227 [2024-11-09 16:26:56.826239] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:37.227 [2024-11-09 16:26:56.826745] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:37.227 [2024-11-09 16:26:56.826764] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.227 [2024-11-09 16:26:56.826771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:37.227 [2024-11-09 16:26:56.826777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:17:37.227 [2024-11-09 16:26:56.826783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.227 [2024-11-09 16:26:56.827721] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:37.227 [2024-11-09 16:26:56.837510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.227 [2024-11-09 16:26:56.837535] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:37.227 [2024-11-09 16:26:56.837543] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.790 ms 00:17:37.227 [2024-11-09 16:26:56.837549] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.227 [2024-11-09 16:26:56.837605] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.227 [2024-11-09 16:26:56.837614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:37.227 [2024-11-09 16:26:56.837620] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:37.227 [2024-11-09 16:26:56.837626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.227 [2024-11-09 16:26:56.841823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.227 [2024-11-09 
16:26:56.841845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:37.227 [2024-11-09 16:26:56.841852] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.168 ms 00:17:37.227 [2024-11-09 16:26:56.841861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.227 [2024-11-09 16:26:56.841942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.227 [2024-11-09 16:26:56.841950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:37.227 [2024-11-09 16:26:56.841956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:37.227 [2024-11-09 16:26:56.841961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.227 [2024-11-09 16:26:56.841977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.227 [2024-11-09 16:26:56.841983] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:37.227 [2024-11-09 16:26:56.841989] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:37.227 [2024-11-09 16:26:56.841995] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.227 [2024-11-09 16:26:56.842018] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:37.227 [2024-11-09 16:26:56.844749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.227 [2024-11-09 16:26:56.844770] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:37.227 [2024-11-09 16:26:56.844777] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.740 ms 00:17:37.227 [2024-11-09 16:26:56.844784] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.227 [2024-11-09 16:26:56.844811] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.227 [2024-11-09 16:26:56.844818] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:37.227 [2024-11-09 16:26:56.844824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:37.227 [2024-11-09 16:26:56.844829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.227 [2024-11-09 16:26:56.844842] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:37.227 [2024-11-09 16:26:56.844855] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:17:37.227 [2024-11-09 16:26:56.844880] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:37.227 [2024-11-09 16:26:56.844892] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:17:37.227 [2024-11-09 16:26:56.844947] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:17:37.228 [2024-11-09 16:26:56.844954] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:37.228 [2024-11-09 16:26:56.844962] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:17:37.228 [2024-11-09 16:26:56.844969] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:37.228 [2024-11-09 16:26:56.844976] ftl_layout.c: 678:ftl_layout_setup: 
*NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:37.228 [2024-11-09 16:26:56.844981] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:37.228 [2024-11-09 16:26:56.844986] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:37.228 [2024-11-09 16:26:56.844992] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:17:37.228 [2024-11-09 16:26:56.844999] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:17:37.228 [2024-11-09 16:26:56.845005] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.228 [2024-11-09 16:26:56.845010] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:37.228 [2024-11-09 16:26:56.845016] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:17:37.228 [2024-11-09 16:26:56.845021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.228 [2024-11-09 16:26:56.845070] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.228 [2024-11-09 16:26:56.845076] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:37.228 [2024-11-09 16:26:56.845082] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:37.228 [2024-11-09 16:26:56.845087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.228 [2024-11-09 16:26:56.845142] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:37.228 [2024-11-09 16:26:56.845157] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:37.228 [2024-11-09 16:26:56.845163] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:37.228 [2024-11-09 16:26:56.845169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.228 [2024-11-09 16:26:56.845174] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:37.228 [2024-11-09 16:26:56.845179] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:37.228 [2024-11-09 16:26:56.845185] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:37.228 [2024-11-09 16:26:56.845190] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:37.228 [2024-11-09 16:26:56.845196] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:37.228 [2024-11-09 16:26:56.845201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:37.228 [2024-11-09 16:26:56.845206] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:37.228 [2024-11-09 16:26:56.845211] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:37.228 [2024-11-09 16:26:56.845215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:37.228 [2024-11-09 16:26:56.845234] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:37.228 [2024-11-09 16:26:56.845245] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.62 MiB 00:17:37.228 [2024-11-09 16:26:56.845250] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.228 [2024-11-09 16:26:56.845256] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:37.228 [2024-11-09 16:26:56.845261] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.75 MiB 00:17:37.228 [2024-11-09 16:26:56.845266] ftl_layout.c: 118:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:37.228 [2024-11-09 16:26:56.845271] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:17:37.228 [2024-11-09 16:26:56.845276] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.88 MiB 00:17:37.228 [2024-11-09 16:26:56.845281] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:17:37.228 [2024-11-09 16:26:56.845287] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:37.228 [2024-11-09 16:26:56.845292] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:37.228 [2024-11-09 16:26:56.845297] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:37.228 [2024-11-09 16:26:56.845302] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:37.228 [2024-11-09 16:26:56.845306] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 95.12 MiB 00:17:37.228 [2024-11-09 16:26:56.845312] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:37.228 [2024-11-09 16:26:56.845317] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:37.228 [2024-11-09 16:26:56.845322] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:37.228 [2024-11-09 16:26:56.845327] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:37.228 [2024-11-09 16:26:56.845332] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:37.228 [2024-11-09 16:26:56.845336] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 103.12 MiB 00:17:37.228 [2024-11-09 16:26:56.845341] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:17:37.228 [2024-11-09 16:26:56.845346] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:37.228 [2024-11-09 16:26:56.845351] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:37.228 [2024-11-09 16:26:56.845356] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:37.228 [2024-11-09 16:26:56.845361] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:37.228 [2024-11-09 16:26:56.845366] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.38 MiB 00:17:37.228 [2024-11-09 16:26:56.845371] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:37.228 [2024-11-09 16:26:56.845375] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:37.228 [2024-11-09 16:26:56.845381] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:37.228 [2024-11-09 16:26:56.845386] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:37.228 [2024-11-09 16:26:56.845393] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:37.228 [2024-11-09 16:26:56.845399] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:37.228 [2024-11-09 16:26:56.845405] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:37.228 [2024-11-09 16:26:56.845411] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:37.228 [2024-11-09 16:26:56.845416] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:37.228 [2024-11-09 16:26:56.845421] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:37.228 [2024-11-09 16:26:56.845425] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:37.228 [2024-11-09 16:26:56.845437] 
upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:37.228 [2024-11-09 16:26:56.845444] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:37.228 [2024-11-09 16:26:56.845451] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:37.228 [2024-11-09 16:26:56.845457] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5a20 blk_sz:0x80 00:17:37.228 [2024-11-09 16:26:56.845462] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x5aa0 blk_sz:0x80 00:17:37.228 [2024-11-09 16:26:56.845468] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5b20 blk_sz:0x400 00:17:37.228 [2024-11-09 16:26:56.845473] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5f20 blk_sz:0x400 00:17:37.228 [2024-11-09 16:26:56.845478] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x6320 blk_sz:0x400 00:17:37.228 [2024-11-09 16:26:56.845483] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x6720 blk_sz:0x400 00:17:37.228 [2024-11-09 16:26:56.845489] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6b20 blk_sz:0x40 00:17:37.228 [2024-11-09 16:26:56.845494] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6b60 blk_sz:0x40 00:17:37.228 [2024-11-09 16:26:56.845499] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x6ba0 blk_sz:0x20 00:17:37.228 [2024-11-09 16:26:56.845504] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x6bc0 blk_sz:0x20 00:17:37.228 [2024-11-09 16:26:56.845510] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x6be0 blk_sz:0x100000 00:17:37.228 [2024-11-09 16:26:56.845515] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x106be0 blk_sz:0x3c720 00:17:37.228 [2024-11-09 16:26:56.845521] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:37.228 [2024-11-09 16:26:56.845530] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:37.228 [2024-11-09 16:26:56.845536] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:37.228 [2024-11-09 16:26:56.845541] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:37.228 [2024-11-09 16:26:56.845547] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:37.228 [2024-11-09 16:26:56.845553] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 
blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:37.228 [2024-11-09 16:26:56.845558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.228 [2024-11-09 16:26:56.845564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:37.228 [2024-11-09 16:26:56.845570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:17:37.228 [2024-11-09 16:26:56.845575] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.228 [2024-11-09 16:26:56.857342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.228 [2024-11-09 16:26:56.857442] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:37.228 [2024-11-09 16:26:56.857453] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.735 ms 00:17:37.228 [2024-11-09 16:26:56.857459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.228 [2024-11-09 16:26:56.857545] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.228 [2024-11-09 16:26:56.857552] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:37.228 [2024-11-09 16:26:56.857558] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:37.228 [2024-11-09 16:26:56.857564] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.228 [2024-11-09 16:26:56.892576] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.229 [2024-11-09 16:26:56.892607] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:37.229 [2024-11-09 16:26:56.892617] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.996 ms 00:17:37.229 [2024-11-09 16:26:56.892624] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.229 [2024-11-09 16:26:56.892679] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.229 [2024-11-09 16:26:56.892688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:37.229 [2024-11-09 16:26:56.892699] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:37.229 [2024-11-09 16:26:56.892705] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.229 [2024-11-09 16:26:56.892977] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.229 [2024-11-09 16:26:56.892988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:37.229 [2024-11-09 16:26:56.892995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:17:37.229 [2024-11-09 16:26:56.893000] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.229 [2024-11-09 16:26:56.893092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.229 [2024-11-09 16:26:56.893100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:37.229 [2024-11-09 16:26:56.893105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:37.229 [2024-11-09 16:26:56.893110] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.229 [2024-11-09 16:26:56.904360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.229 [2024-11-09 16:26:56.904468] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:37.229 [2024-11-09 16:26:56.904480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.233 ms 00:17:37.229 
[2024-11-09 16:26:56.904489] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.229 [2024-11-09 16:26:56.914716] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:37.229 [2024-11-09 16:26:56.914743] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:37.229 [2024-11-09 16:26:56.914752] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.229 [2024-11-09 16:26:56.914759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:37.229 [2024-11-09 16:26:56.914765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.188 ms 00:17:37.229 [2024-11-09 16:26:56.914771] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.229 [2024-11-09 16:26:56.933553] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.229 [2024-11-09 16:26:56.933583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:37.229 [2024-11-09 16:26:56.933592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.729 ms 00:17:37.229 [2024-11-09 16:26:56.933598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.229 [2024-11-09 16:26:56.942960] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.229 [2024-11-09 16:26:56.942984] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:37.229 [2024-11-09 16:26:56.942997] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.311 ms 00:17:37.229 [2024-11-09 16:26:56.943002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.229 [2024-11-09 16:26:56.952048] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.229 [2024-11-09 16:26:56.952072] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:37.229 [2024-11-09 16:26:56.952079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.007 ms 00:17:37.229 [2024-11-09 16:26:56.952084] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.229 [2024-11-09 16:26:56.952357] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.229 [2024-11-09 16:26:56.952370] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:37.229 [2024-11-09 16:26:56.952377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:17:37.229 [2024-11-09 16:26:56.952385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.490 [2024-11-09 16:26:56.997826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.490 [2024-11-09 16:26:56.997861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:37.490 [2024-11-09 16:26:56.997871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.424 ms 00:17:37.490 [2024-11-09 16:26:56.997881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.490 [2024-11-09 16:26:57.005764] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:37.490 [2024-11-09 16:26:57.016942] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.490 [2024-11-09 16:26:57.016971] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:37.490 [2024-11-09 16:26:57.016981] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.994 ms 00:17:37.490 [2024-11-09 16:26:57.016988] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.490 [2024-11-09 16:26:57.017041] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.490 [2024-11-09 16:26:57.017049] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:37.490 [2024-11-09 16:26:57.017057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:37.490 [2024-11-09 16:26:57.017063] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.490 [2024-11-09 16:26:57.017100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.490 [2024-11-09 16:26:57.017106] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:37.490 [2024-11-09 16:26:57.017112] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:17:37.490 [2024-11-09 16:26:57.017118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.490 [2024-11-09 16:26:57.018062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.490 [2024-11-09 16:26:57.018086] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:17:37.490 [2024-11-09 16:26:57.018093] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.927 ms 00:17:37.490 [2024-11-09 16:26:57.018098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.490 [2024-11-09 16:26:57.018123] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.490 [2024-11-09 16:26:57.018133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:37.490 [2024-11-09 16:26:57.018139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:37.490 [2024-11-09 16:26:57.018145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.490 [2024-11-09 16:26:57.018171] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:37.490 [2024-11-09 16:26:57.018178] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.490 [2024-11-09 16:26:57.018184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:37.490 [2024-11-09 16:26:57.018190] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:37.490 [2024-11-09 16:26:57.018196] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.490 [2024-11-09 16:26:57.036913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.490 [2024-11-09 16:26:57.036939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:37.490 [2024-11-09 16:26:57.036947] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.702 ms 00:17:37.490 [2024-11-09 16:26:57.036953] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.490 [2024-11-09 16:26:57.037017] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.490 [2024-11-09 16:26:57.037025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:37.490 [2024-11-09 16:26:57.037032] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:17:37.490 [2024-11-09 16:26:57.037037] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.490 [2024-11-09 16:26:57.037645] mngt/ftl_mngt_ioch.c: 
57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:37.490 [2024-11-09 16:26:57.040037] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 213.489 ms, result 0 00:17:37.490 [2024-11-09 16:26:57.040899] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:37.490 [2024-11-09 16:26:57.056111] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:38.431  [2024-11-09T16:26:59.145Z] Copying: 20/256 [MB] (20 MBps) [2024-11-09T16:27:00.090Z] Copying: 39/256 [MB] (18 MBps) [2024-11-09T16:27:01.501Z] Copying: 57/256 [MB] (17 MBps) [2024-11-09T16:27:02.447Z] Copying: 78/256 [MB] (20 MBps) [2024-11-09T16:27:03.393Z] Copying: 96/256 [MB] (18 MBps) [2024-11-09T16:27:04.337Z] Copying: 108/256 [MB] (11 MBps) [2024-11-09T16:27:05.281Z] Copying: 118/256 [MB] (10 MBps) [2024-11-09T16:27:06.226Z] Copying: 130/256 [MB] (12 MBps) [2024-11-09T16:27:07.255Z] Copying: 141/256 [MB] (11 MBps) [2024-11-09T16:27:08.201Z] Copying: 152/256 [MB] (10 MBps) [2024-11-09T16:27:09.147Z] Copying: 163/256 [MB] (10 MBps) [2024-11-09T16:27:10.090Z] Copying: 176/256 [MB] (12 MBps) [2024-11-09T16:27:11.475Z] Copying: 187/256 [MB] (11 MBps) [2024-11-09T16:27:12.420Z] Copying: 199/256 [MB] (11 MBps) [2024-11-09T16:27:13.367Z] Copying: 212/256 [MB] (12 MBps) [2024-11-09T16:27:14.313Z] Copying: 223/256 [MB] (11 MBps) [2024-11-09T16:27:15.262Z] Copying: 235/256 [MB] (11 MBps) [2024-11-09T16:27:16.206Z] Copying: 245/256 [MB] (10 MBps) [2024-11-09T16:27:16.206Z] Copying: 255/256 [MB] (10 MBps) [2024-11-09T16:27:16.466Z] Copying: 256/256 [MB] (average 13 MBps)[2024-11-09 16:27:16.450497] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:56.696 [2024-11-09 16:27:16.462901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.696 [2024-11-09 16:27:16.462965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:56.697 [2024-11-09 16:27:16.462981] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:56.697 [2024-11-09 16:27:16.462990] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.697 [2024-11-09 16:27:16.463020] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:56.697 [2024-11-09 16:27:16.465941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.960 [2024-11-09 16:27:16.466149] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:56.960 [2024-11-09 16:27:16.466172] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.905 ms 00:17:56.960 [2024-11-09 16:27:16.466180] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.960 [2024-11-09 16:27:16.466510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.960 [2024-11-09 16:27:16.466523] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:56.960 [2024-11-09 16:27:16.466533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:17:56.960 [2024-11-09 16:27:16.466545] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.960 [2024-11-09 16:27:16.472218] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.960 [2024-11-09 16:27:16.472275] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:56.960 [2024-11-09 16:27:16.472287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.652 ms 00:17:56.960 [2024-11-09 16:27:16.472297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.960 [2024-11-09 16:27:16.479206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.960 [2024-11-09 16:27:16.479255] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:17:56.960 [2024-11-09 16:27:16.479268] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.862 ms 00:17:56.960 [2024-11-09 16:27:16.479277] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.960 [2024-11-09 16:27:16.508047] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.960 [2024-11-09 16:27:16.508100] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:56.960 [2024-11-09 16:27:16.508114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.678 ms 00:17:56.960 [2024-11-09 16:27:16.508122] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.960 [2024-11-09 16:27:16.525436] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.960 [2024-11-09 16:27:16.525624] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:56.960 [2024-11-09 16:27:16.525647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.225 ms 00:17:56.960 [2024-11-09 16:27:16.525656] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.960 [2024-11-09 16:27:16.525947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.960 [2024-11-09 16:27:16.525976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:56.960 [2024-11-09 16:27:16.525988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:17:56.960 [2024-11-09 16:27:16.525996] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.960 [2024-11-09 16:27:16.551895] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.961 [2024-11-09 16:27:16.551939] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:17:56.961 [2024-11-09 16:27:16.551951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.880 ms 00:17:56.961 [2024-11-09 16:27:16.551959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.961 [2024-11-09 16:27:16.577735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.961 [2024-11-09 16:27:16.577780] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:17:56.961 [2024-11-09 16:27:16.577793] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.706 ms 00:17:56.961 [2024-11-09 16:27:16.577800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.961 [2024-11-09 16:27:16.603145] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.961 [2024-11-09 16:27:16.603192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:56.961 [2024-11-09 16:27:16.603206] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.276 ms 00:17:56.961 [2024-11-09 16:27:16.603213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.961 [2024-11-09 16:27:16.628753] mngt/ftl_mngt.c: 406:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:17:56.961 [2024-11-09 16:27:16.628801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:56.961 [2024-11-09 16:27:16.628814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.417 ms 00:17:56.961 [2024-11-09 16:27:16.628820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.961 [2024-11-09 16:27:16.628888] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:56.961 [2024-11-09 16:27:16.628906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.628917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.628925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.628933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.628942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.628950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.628957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.628965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.628973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.628981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.628988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.628995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629070] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 
16:27:16.629324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 
00:17:56.961 [2024-11-09 16:27:16.629527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:56.961 [2024-11-09 16:27:16.629566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 
wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:56.962 [2024-11-09 16:27:16.629774] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:56.962 [2024-11-09 16:27:16.629782] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0532df08-a2cd-46ce-aed4-05877063f95a 00:17:56.962 [2024-11-09 16:27:16.629791] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:56.962 [2024-11-09 16:27:16.629799] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:56.962 [2024-11-09 16:27:16.629807] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:56.962 [2024-11-09 16:27:16.629816] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:56.962 [2024-11-09 16:27:16.629823] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:56.962 [2024-11-09 16:27:16.629836] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:56.962 [2024-11-09 16:27:16.629844] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:56.962 [2024-11-09 16:27:16.629857] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:56.962 [2024-11-09 16:27:16.629865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:56.962 [2024-11-09 16:27:16.629874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.962 [2024-11-09 16:27:16.629882] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:56.962 [2024-11-09 16:27:16.629891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.987 ms 00:17:56.962 [2024-11-09 16:27:16.629899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.962 [2024-11-09 16:27:16.643908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.962 [2024-11-09 16:27:16.643951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:56.962 [2024-11-09 16:27:16.643971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.971 ms 00:17:56.962 [2024-11-09 16:27:16.643979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.962 [2024-11-09 16:27:16.644244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.962 [2024-11-09 16:27:16.644261] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:56.962 [2024-11-09 16:27:16.644271] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.210 ms 00:17:56.962 [2024-11-09 16:27:16.644279] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.962 [2024-11-09 16:27:16.686007] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.962 [2024-11-09 16:27:16.686062] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:56.962 [2024-11-09 16:27:16.686081] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:17:56.962 [2024-11-09 16:27:16.686089] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.962 [2024-11-09 16:27:16.686190] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.962 [2024-11-09 16:27:16.686200] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:56.962 [2024-11-09 16:27:16.686209] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.962 [2024-11-09 16:27:16.686217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.962 [2024-11-09 16:27:16.686295] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.962 [2024-11-09 16:27:16.686305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:56.962 [2024-11-09 16:27:16.686313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.962 [2024-11-09 16:27:16.686326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.962 [2024-11-09 16:27:16.686363] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.962 [2024-11-09 16:27:16.686372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:56.962 [2024-11-09 16:27:16.686381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.962 [2024-11-09 16:27:16.686389] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.223 [2024-11-09 16:27:16.767995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.223 [2024-11-09 16:27:16.768052] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:57.223 [2024-11-09 16:27:16.768072] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.223 [2024-11-09 16:27:16.768080] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.223 [2024-11-09 16:27:16.800209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.223 [2024-11-09 16:27:16.800275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:57.223 [2024-11-09 16:27:16.800287] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.223 [2024-11-09 16:27:16.800297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.223 [2024-11-09 16:27:16.800361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.223 [2024-11-09 16:27:16.800371] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:57.223 [2024-11-09 16:27:16.800380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.223 [2024-11-09 16:27:16.800388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.223 [2024-11-09 16:27:16.800428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.223 [2024-11-09 16:27:16.800439] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:57.224 [2024-11-09 16:27:16.800448] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.224 [2024-11-09 16:27:16.800456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.224 [2024-11-09 16:27:16.800562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.224 [2024-11-09 16:27:16.800572] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:57.224 [2024-11-09 
16:27:16.800581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.224 [2024-11-09 16:27:16.800589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.224 [2024-11-09 16:27:16.800627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.224 [2024-11-09 16:27:16.800636] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:57.224 [2024-11-09 16:27:16.800645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.224 [2024-11-09 16:27:16.800654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.224 [2024-11-09 16:27:16.800702] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.224 [2024-11-09 16:27:16.800711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:57.224 [2024-11-09 16:27:16.800719] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.224 [2024-11-09 16:27:16.800727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.224 [2024-11-09 16:27:16.800781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.224 [2024-11-09 16:27:16.800795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:57.224 [2024-11-09 16:27:16.800804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.224 [2024-11-09 16:27:16.800812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.224 [2024-11-09 16:27:16.800981] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 338.093 ms, result 0 00:17:58.169 00:17:58.169 00:17:58.169 16:27:17 -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:58.742 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:17:58.743 16:27:18 -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:17:58.743 16:27:18 -- ftl/trim.sh@109 -- # fio_kill 00:17:58.743 16:27:18 -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:17:58.743 16:27:18 -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:58.743 16:27:18 -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:17:58.743 16:27:18 -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:58.743 Process with pid 72772 is not found 00:17:58.743 16:27:18 -- ftl/trim.sh@20 -- # killprocess 72772 00:17:58.743 16:27:18 -- common/autotest_common.sh@936 -- # '[' -z 72772 ']' 00:17:58.743 16:27:18 -- common/autotest_common.sh@940 -- # kill -0 72772 00:17:58.743 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (72772) - No such process 00:17:58.743 16:27:18 -- common/autotest_common.sh@963 -- # echo 'Process with pid 72772 is not found' 00:17:58.743 ************************************ 00:17:58.743 END TEST ftl_trim 00:17:58.743 ************************************ 00:17:58.743 00:17:58.743 real 1m25.804s 00:17:58.743 user 1m40.806s 00:17:58.743 sys 0m16.033s 00:17:58.743 16:27:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:58.743 16:27:18 -- common/autotest_common.sh@10 -- # set +x 00:17:58.743 16:27:18 -- ftl/ftl.sh@77 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:58.743 16:27:18 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:17:58.743 16:27:18 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:17:58.743 16:27:18 -- common/autotest_common.sh@10 -- # set +x 00:17:58.743 ************************************ 00:17:58.743 START TEST ftl_restore 00:17:58.743 ************************************ 00:17:58.743 16:27:18 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:06.0 0000:00:07.0 00:17:58.743 * Looking for test storage... 00:17:58.743 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:58.743 16:27:18 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:58.743 16:27:18 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:58.743 16:27:18 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:59.005 16:27:18 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:59.005 16:27:18 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:59.005 16:27:18 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:59.005 16:27:18 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:59.005 16:27:18 -- scripts/common.sh@335 -- # IFS=.-: 00:17:59.005 16:27:18 -- scripts/common.sh@335 -- # read -ra ver1 00:17:59.005 16:27:18 -- scripts/common.sh@336 -- # IFS=.-: 00:17:59.005 16:27:18 -- scripts/common.sh@336 -- # read -ra ver2 00:17:59.005 16:27:18 -- scripts/common.sh@337 -- # local 'op=<' 00:17:59.005 16:27:18 -- scripts/common.sh@339 -- # ver1_l=2 00:17:59.005 16:27:18 -- scripts/common.sh@340 -- # ver2_l=1 00:17:59.005 16:27:18 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:59.005 16:27:18 -- scripts/common.sh@343 -- # case "$op" in 00:17:59.005 16:27:18 -- scripts/common.sh@344 -- # : 1 00:17:59.005 16:27:18 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:59.005 16:27:18 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:59.005 16:27:18 -- scripts/common.sh@364 -- # decimal 1 00:17:59.005 16:27:18 -- scripts/common.sh@352 -- # local d=1 00:17:59.005 16:27:18 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:59.005 16:27:18 -- scripts/common.sh@354 -- # echo 1 00:17:59.005 16:27:18 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:59.005 16:27:18 -- scripts/common.sh@365 -- # decimal 2 00:17:59.005 16:27:18 -- scripts/common.sh@352 -- # local d=2 00:17:59.005 16:27:18 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:59.005 16:27:18 -- scripts/common.sh@354 -- # echo 2 00:17:59.005 16:27:18 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:59.005 16:27:18 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:59.005 16:27:18 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:59.005 16:27:18 -- scripts/common.sh@367 -- # return 0 00:17:59.005 16:27:18 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:59.005 16:27:18 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:59.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.005 --rc genhtml_branch_coverage=1 00:17:59.005 --rc genhtml_function_coverage=1 00:17:59.005 --rc genhtml_legend=1 00:17:59.005 --rc geninfo_all_blocks=1 00:17:59.005 --rc geninfo_unexecuted_blocks=1 00:17:59.005 00:17:59.005 ' 00:17:59.005 16:27:18 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:59.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.005 --rc genhtml_branch_coverage=1 00:17:59.005 --rc genhtml_function_coverage=1 00:17:59.005 --rc genhtml_legend=1 00:17:59.005 --rc geninfo_all_blocks=1 00:17:59.005 --rc geninfo_unexecuted_blocks=1 00:17:59.005 00:17:59.005 ' 00:17:59.005 16:27:18 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:59.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.005 --rc genhtml_branch_coverage=1 00:17:59.005 --rc genhtml_function_coverage=1 00:17:59.005 --rc genhtml_legend=1 00:17:59.005 --rc geninfo_all_blocks=1 00:17:59.005 --rc geninfo_unexecuted_blocks=1 00:17:59.005 00:17:59.005 ' 00:17:59.005 16:27:18 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:59.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:59.005 --rc genhtml_branch_coverage=1 00:17:59.005 --rc genhtml_function_coverage=1 00:17:59.005 --rc genhtml_legend=1 00:17:59.005 --rc geninfo_all_blocks=1 00:17:59.005 --rc geninfo_unexecuted_blocks=1 00:17:59.005 00:17:59.005 ' 00:17:59.005 16:27:18 -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:59.005 16:27:18 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:17:59.005 16:27:18 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:59.005 16:27:18 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:59.005 16:27:18 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:59.005 16:27:18 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:59.005 16:27:18 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:59.005 16:27:18 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:59.005 16:27:18 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:59.005 16:27:18 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.005 16:27:18 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.005 16:27:18 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:59.005 16:27:18 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:59.005 16:27:18 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:59.005 16:27:18 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:59.005 16:27:18 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:59.005 16:27:18 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:59.005 16:27:18 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.005 16:27:18 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.005 16:27:18 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:59.005 16:27:18 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:59.005 16:27:18 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:59.005 16:27:18 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:59.005 16:27:18 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:59.005 16:27:18 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:59.005 16:27:18 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:59.005 16:27:18 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:59.005 16:27:18 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:59.005 16:27:18 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:59.005 16:27:18 -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:59.005 16:27:18 -- ftl/restore.sh@13 -- # mktemp -d 00:17:59.005 16:27:18 -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.rY6jx3LyN4 00:17:59.005 16:27:18 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:59.005 16:27:18 -- ftl/restore.sh@16 -- # case $opt in 00:17:59.005 16:27:18 -- ftl/restore.sh@18 -- # nv_cache=0000:00:06.0 00:17:59.005 16:27:18 -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:17:59.005 16:27:18 -- ftl/restore.sh@23 -- # shift 2 00:17:59.005 16:27:18 -- ftl/restore.sh@24 -- # device=0000:00:07.0 00:17:59.005 16:27:18 -- ftl/restore.sh@25 -- # timeout=240 00:17:59.005 16:27:18 -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:17:59.005 16:27:18 -- ftl/restore.sh@39 -- # svcpid=73125 00:17:59.005 16:27:18 -- ftl/restore.sh@41 -- # waitforlisten 73125 00:17:59.005 16:27:18 -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:59.006 16:27:18 -- common/autotest_common.sh@829 -- # '[' -z 73125 ']' 00:17:59.006 16:27:18 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
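The harness is blocked in waitforlisten at this point: spdk_tgt has been launched in the background, and no bdev_* RPC can be issued until the target's Unix-domain RPC socket is accepting connections, which is why the rpc.py calls only appear below. A minimal sketch of that wait, assuming a plain socket-file poll rather than the exact retry logic in autotest_common.sh:

  #!/usr/bin/env bash
  # Poll for the SPDK RPC socket. /var/tmp/spdk.sock is the default RPC
  # address used by this run; 100 matches the max_retries seen in the xtrace.
  rpc_sock=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
      [[ -S $rpc_sock ]] && break   # -S: path exists and is a socket
      sleep 0.1
  done
  [[ -S $rpc_sock ]] || { echo "spdk_tgt never opened $rpc_sock" >&2; exit 1; }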
00:17:59.006 16:27:18 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:59.006 16:27:18 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.006 16:27:18 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:59.006 16:27:18 -- common/autotest_common.sh@10 -- # set +x 00:17:59.006 [2024-11-09 16:27:18.666247] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:17:59.006 [2024-11-09 16:27:18.666533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73125 ] 00:17:59.267 [2024-11-09 16:27:18.820317] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.528 [2024-11-09 16:27:19.048017] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:59.528 [2024-11-09 16:27:19.048508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.472 16:27:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:00.472 16:27:20 -- common/autotest_common.sh@862 -- # return 0 00:18:00.472 16:27:20 -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:18:00.472 16:27:20 -- ftl/common.sh@54 -- # local name=nvme0 00:18:00.472 16:27:20 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:18:00.472 16:27:20 -- ftl/common.sh@56 -- # local size=103424 00:18:00.472 16:27:20 -- ftl/common.sh@59 -- # local base_bdev 00:18:00.472 16:27:20 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:18:00.733 16:27:20 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:00.733 16:27:20 -- ftl/common.sh@62 -- # local base_size 00:18:00.733 16:27:20 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:00.733 16:27:20 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:18:00.733 16:27:20 -- common/autotest_common.sh@1368 -- # local bdev_info 00:18:00.733 16:27:20 -- common/autotest_common.sh@1369 -- # local bs 00:18:00.733 16:27:20 -- common/autotest_common.sh@1370 -- # local nb 00:18:00.733 16:27:20 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:00.996 16:27:20 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:00.997 { 00:18:00.997 "name": "nvme0n1", 00:18:00.997 "aliases": [ 00:18:00.997 "936ba798-58a4-4e80-9896-ac5b69d9944f" 00:18:00.997 ], 00:18:00.997 "product_name": "NVMe disk", 00:18:00.997 "block_size": 4096, 00:18:00.997 "num_blocks": 1310720, 00:18:00.997 "uuid": "936ba798-58a4-4e80-9896-ac5b69d9944f", 00:18:00.997 "assigned_rate_limits": { 00:18:00.997 "rw_ios_per_sec": 0, 00:18:00.997 "rw_mbytes_per_sec": 0, 00:18:00.997 "r_mbytes_per_sec": 0, 00:18:00.997 "w_mbytes_per_sec": 0 00:18:00.997 }, 00:18:00.997 "claimed": true, 00:18:00.997 "claim_type": "read_many_write_one", 00:18:00.997 "zoned": false, 00:18:00.997 "supported_io_types": { 00:18:00.997 "read": true, 00:18:00.997 "write": true, 00:18:00.997 "unmap": true, 00:18:00.997 "write_zeroes": true, 00:18:00.997 "flush": true, 00:18:00.997 "reset": true, 00:18:00.997 "compare": true, 00:18:00.997 "compare_and_write": false, 00:18:00.997 "abort": true, 00:18:00.997 "nvme_admin": true, 00:18:00.997 "nvme_io": true 00:18:00.997 }, 00:18:00.997 "driver_specific": { 00:18:00.997 "nvme": 
[ 00:18:00.997 { 00:18:00.997 "pci_address": "0000:00:07.0", 00:18:00.997 "trid": { 00:18:00.997 "trtype": "PCIe", 00:18:00.997 "traddr": "0000:00:07.0" 00:18:00.997 }, 00:18:00.997 "ctrlr_data": { 00:18:00.997 "cntlid": 0, 00:18:00.997 "vendor_id": "0x1b36", 00:18:00.997 "model_number": "QEMU NVMe Ctrl", 00:18:00.997 "serial_number": "12341", 00:18:00.997 "firmware_revision": "8.0.0", 00:18:00.997 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:00.997 "oacs": { 00:18:00.997 "security": 0, 00:18:00.997 "format": 1, 00:18:00.997 "firmware": 0, 00:18:00.997 "ns_manage": 1 00:18:00.997 }, 00:18:00.997 "multi_ctrlr": false, 00:18:00.997 "ana_reporting": false 00:18:00.997 }, 00:18:00.997 "vs": { 00:18:00.997 "nvme_version": "1.4" 00:18:00.997 }, 00:18:00.997 "ns_data": { 00:18:00.997 "id": 1, 00:18:00.997 "can_share": false 00:18:00.997 } 00:18:00.997 } 00:18:00.997 ], 00:18:00.997 "mp_policy": "active_passive" 00:18:00.997 } 00:18:00.997 } 00:18:00.997 ]' 00:18:00.997 16:27:20 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:00.997 16:27:20 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:00.997 16:27:20 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:00.997 16:27:20 -- common/autotest_common.sh@1373 -- # nb=1310720 00:18:00.997 16:27:20 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:18:00.997 16:27:20 -- common/autotest_common.sh@1377 -- # echo 5120 00:18:00.997 16:27:20 -- ftl/common.sh@63 -- # base_size=5120 00:18:00.997 16:27:20 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:00.997 16:27:20 -- ftl/common.sh@67 -- # clear_lvols 00:18:00.997 16:27:20 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:00.997 16:27:20 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:01.259 16:27:20 -- ftl/common.sh@28 -- # stores=271af190-802c-4120-87b6-89f4f0e09460 00:18:01.259 16:27:20 -- ftl/common.sh@29 -- # for lvs in $stores 00:18:01.260 16:27:20 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 271af190-802c-4120-87b6-89f4f0e09460 00:18:01.521 16:27:21 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:01.781 16:27:21 -- ftl/common.sh@68 -- # lvs=6f524507-c819-4902-afb8-e8bd68486407 00:18:01.781 16:27:21 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 6f524507-c819-4902-afb8-e8bd68486407 00:18:02.039 16:27:21 -- ftl/restore.sh@43 -- # split_bdev=08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8 00:18:02.039 16:27:21 -- ftl/restore.sh@44 -- # '[' -n 0000:00:06.0 ']' 00:18:02.039 16:27:21 -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:06.0 08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8 00:18:02.039 16:27:21 -- ftl/common.sh@35 -- # local name=nvc0 00:18:02.039 16:27:21 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:18:02.040 16:27:21 -- ftl/common.sh@37 -- # local base_bdev=08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8 00:18:02.040 16:27:21 -- ftl/common.sh@38 -- # local cache_size= 00:18:02.040 16:27:21 -- ftl/common.sh@41 -- # get_bdev_size 08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8 00:18:02.040 16:27:21 -- common/autotest_common.sh@1367 -- # local bdev_name=08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8 00:18:02.040 16:27:21 -- common/autotest_common.sh@1368 -- # local bdev_info 00:18:02.040 16:27:21 -- common/autotest_common.sh@1369 -- # local bs 00:18:02.040 16:27:21 -- common/autotest_common.sh@1370 -- # local nb 00:18:02.040 16:27:21 -- 
common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8 00:18:02.298 16:27:21 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:02.298 { 00:18:02.298 "name": "08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8", 00:18:02.298 "aliases": [ 00:18:02.298 "lvs/nvme0n1p0" 00:18:02.298 ], 00:18:02.298 "product_name": "Logical Volume", 00:18:02.298 "block_size": 4096, 00:18:02.298 "num_blocks": 26476544, 00:18:02.298 "uuid": "08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8", 00:18:02.298 "assigned_rate_limits": { 00:18:02.298 "rw_ios_per_sec": 0, 00:18:02.298 "rw_mbytes_per_sec": 0, 00:18:02.298 "r_mbytes_per_sec": 0, 00:18:02.298 "w_mbytes_per_sec": 0 00:18:02.298 }, 00:18:02.298 "claimed": false, 00:18:02.298 "zoned": false, 00:18:02.298 "supported_io_types": { 00:18:02.298 "read": true, 00:18:02.298 "write": true, 00:18:02.298 "unmap": true, 00:18:02.298 "write_zeroes": true, 00:18:02.298 "flush": false, 00:18:02.298 "reset": true, 00:18:02.298 "compare": false, 00:18:02.298 "compare_and_write": false, 00:18:02.298 "abort": false, 00:18:02.298 "nvme_admin": false, 00:18:02.298 "nvme_io": false 00:18:02.298 }, 00:18:02.298 "driver_specific": { 00:18:02.298 "lvol": { 00:18:02.298 "lvol_store_uuid": "6f524507-c819-4902-afb8-e8bd68486407", 00:18:02.298 "base_bdev": "nvme0n1", 00:18:02.298 "thin_provision": true, 00:18:02.298 "snapshot": false, 00:18:02.298 "clone": false, 00:18:02.298 "esnap_clone": false 00:18:02.298 } 00:18:02.298 } 00:18:02.298 } 00:18:02.298 ]' 00:18:02.298 16:27:21 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:02.298 16:27:21 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:02.298 16:27:21 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:02.298 16:27:21 -- common/autotest_common.sh@1373 -- # nb=26476544 00:18:02.298 16:27:21 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:18:02.298 16:27:21 -- common/autotest_common.sh@1377 -- # echo 103424 00:18:02.298 16:27:21 -- ftl/common.sh@41 -- # local base_size=5171 00:18:02.298 16:27:21 -- ftl/common.sh@44 -- # local nvc_bdev 00:18:02.298 16:27:21 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:18:02.557 16:27:22 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:02.557 16:27:22 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:02.557 16:27:22 -- ftl/common.sh@48 -- # get_bdev_size 08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8 00:18:02.557 16:27:22 -- common/autotest_common.sh@1367 -- # local bdev_name=08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8 00:18:02.557 16:27:22 -- common/autotest_common.sh@1368 -- # local bdev_info 00:18:02.557 16:27:22 -- common/autotest_common.sh@1369 -- # local bs 00:18:02.557 16:27:22 -- common/autotest_common.sh@1370 -- # local nb 00:18:02.557 16:27:22 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8 00:18:02.557 16:27:22 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:02.557 { 00:18:02.557 "name": "08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8", 00:18:02.557 "aliases": [ 00:18:02.557 "lvs/nvme0n1p0" 00:18:02.557 ], 00:18:02.557 "product_name": "Logical Volume", 00:18:02.557 "block_size": 4096, 00:18:02.557 "num_blocks": 26476544, 00:18:02.557 "uuid": "08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8", 00:18:02.557 "assigned_rate_limits": { 00:18:02.557 "rw_ios_per_sec": 0, 00:18:02.557 "rw_mbytes_per_sec": 0, 00:18:02.557 "r_mbytes_per_sec": 0, 00:18:02.557 
"w_mbytes_per_sec": 0 00:18:02.557 }, 00:18:02.557 "claimed": false, 00:18:02.557 "zoned": false, 00:18:02.557 "supported_io_types": { 00:18:02.557 "read": true, 00:18:02.557 "write": true, 00:18:02.557 "unmap": true, 00:18:02.557 "write_zeroes": true, 00:18:02.557 "flush": false, 00:18:02.557 "reset": true, 00:18:02.557 "compare": false, 00:18:02.557 "compare_and_write": false, 00:18:02.557 "abort": false, 00:18:02.557 "nvme_admin": false, 00:18:02.557 "nvme_io": false 00:18:02.557 }, 00:18:02.557 "driver_specific": { 00:18:02.557 "lvol": { 00:18:02.557 "lvol_store_uuid": "6f524507-c819-4902-afb8-e8bd68486407", 00:18:02.557 "base_bdev": "nvme0n1", 00:18:02.557 "thin_provision": true, 00:18:02.557 "snapshot": false, 00:18:02.557 "clone": false, 00:18:02.557 "esnap_clone": false 00:18:02.557 } 00:18:02.557 } 00:18:02.557 } 00:18:02.557 ]' 00:18:02.557 16:27:22 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:02.846 16:27:22 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:02.846 16:27:22 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:02.846 16:27:22 -- common/autotest_common.sh@1373 -- # nb=26476544 00:18:02.846 16:27:22 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:18:02.846 16:27:22 -- common/autotest_common.sh@1377 -- # echo 103424 00:18:02.846 16:27:22 -- ftl/common.sh@48 -- # cache_size=5171 00:18:02.846 16:27:22 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:02.846 16:27:22 -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:02.846 16:27:22 -- ftl/restore.sh@48 -- # get_bdev_size 08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8 00:18:02.846 16:27:22 -- common/autotest_common.sh@1367 -- # local bdev_name=08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8 00:18:02.846 16:27:22 -- common/autotest_common.sh@1368 -- # local bdev_info 00:18:02.846 16:27:22 -- common/autotest_common.sh@1369 -- # local bs 00:18:02.846 16:27:22 -- common/autotest_common.sh@1370 -- # local nb 00:18:02.846 16:27:22 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8 00:18:03.121 16:27:22 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:18:03.121 { 00:18:03.121 "name": "08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8", 00:18:03.121 "aliases": [ 00:18:03.121 "lvs/nvme0n1p0" 00:18:03.121 ], 00:18:03.121 "product_name": "Logical Volume", 00:18:03.121 "block_size": 4096, 00:18:03.121 "num_blocks": 26476544, 00:18:03.121 "uuid": "08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8", 00:18:03.121 "assigned_rate_limits": { 00:18:03.121 "rw_ios_per_sec": 0, 00:18:03.121 "rw_mbytes_per_sec": 0, 00:18:03.121 "r_mbytes_per_sec": 0, 00:18:03.121 "w_mbytes_per_sec": 0 00:18:03.121 }, 00:18:03.121 "claimed": false, 00:18:03.121 "zoned": false, 00:18:03.121 "supported_io_types": { 00:18:03.121 "read": true, 00:18:03.121 "write": true, 00:18:03.121 "unmap": true, 00:18:03.121 "write_zeroes": true, 00:18:03.121 "flush": false, 00:18:03.121 "reset": true, 00:18:03.121 "compare": false, 00:18:03.121 "compare_and_write": false, 00:18:03.121 "abort": false, 00:18:03.121 "nvme_admin": false, 00:18:03.121 "nvme_io": false 00:18:03.121 }, 00:18:03.121 "driver_specific": { 00:18:03.121 "lvol": { 00:18:03.121 "lvol_store_uuid": "6f524507-c819-4902-afb8-e8bd68486407", 00:18:03.121 "base_bdev": "nvme0n1", 00:18:03.121 "thin_provision": true, 00:18:03.121 "snapshot": false, 00:18:03.121 "clone": false, 00:18:03.121 "esnap_clone": false 00:18:03.121 } 00:18:03.121 } 00:18:03.121 } 
00:18:03.121 ]' 00:18:03.121 16:27:22 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:18:03.121 16:27:22 -- common/autotest_common.sh@1372 -- # bs=4096 00:18:03.121 16:27:22 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:18:03.121 16:27:22 -- common/autotest_common.sh@1373 -- # nb=26476544 00:18:03.121 16:27:22 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:18:03.121 16:27:22 -- common/autotest_common.sh@1377 -- # echo 103424 00:18:03.121 16:27:22 -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:03.121 16:27:22 -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8 --l2p_dram_limit 10' 00:18:03.121 16:27:22 -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:03.121 16:27:22 -- ftl/restore.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:18:03.121 16:27:22 -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:03.121 16:27:22 -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:03.121 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:03.121 16:27:22 -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8 --l2p_dram_limit 10 -c nvc0n1p0 00:18:03.380 [2024-11-09 16:27:22.973107] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.380 [2024-11-09 16:27:22.973153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:03.380 [2024-11-09 16:27:22.973166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:03.380 [2024-11-09 16:27:22.973173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.380 [2024-11-09 16:27:22.973212] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.380 [2024-11-09 16:27:22.973220] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:03.380 [2024-11-09 16:27:22.973244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:03.380 [2024-11-09 16:27:22.973250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.380 [2024-11-09 16:27:22.973265] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:03.380 [2024-11-09 16:27:22.973844] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:03.380 [2024-11-09 16:27:22.973865] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.380 [2024-11-09 16:27:22.973871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:03.380 [2024-11-09 16:27:22.973879] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.600 ms 00:18:03.380 [2024-11-09 16:27:22.973885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.380 [2024-11-09 16:27:22.973912] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID f312cd53-be96-4ae1-a0bf-7bec45782f5f 00:18:03.380 [2024-11-09 16:27:22.974847] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.380 [2024-11-09 16:27:22.974865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:03.380 [2024-11-09 16:27:22.974873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:03.380 [2024-11-09 16:27:22.974880] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
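A note on the two checks traced above: get_bdev_size derives a bdev's size in MiB as block_size * num_blocks / 2^20 (4096 * 1310720 / 1048576 = 5120 MiB for the NVMe namespace dumped at the top of this excerpt, and 4096 * 26476544 / 1048576 = 103424 MiB for the thin-provisioned lvol), and the "[: : integer expression expected" message is test(1) at restore.sh line 54 rejecting an empty string as the left operand of -eq. A minimal sketch of both points; the variable name is a hypothetical stand-in for whatever expanded empty in the script:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
info=$("$rpc" bdev_get_bdevs -b 08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8)
bs=$(jq '.[] .block_size' <<< "$info")             # 4096 in this run
nb=$(jq '.[] .num_blocks' <<< "$info")             # 26476544 in this run
echo "bdev_size=$(( bs * nb / 1024 / 1024 )) MiB"  # 4096*26476544/1048576 = 103424

flag=''                                            # hypothetical; the real operand expanded empty
[ "$flag" -eq 1 ]                                  # reproduces: [: : integer expression expected
[ "${flag:-0}" -eq 1 ] || echo 'flag unset'        # defaulting the operand keeps test(1) happy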
00:18:03.380 [2024-11-09 16:27:22.979658] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.380 [2024-11-09 16:27:22.979773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:03.381 [2024-11-09 16:27:22.979786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.742 ms 00:18:03.381 [2024-11-09 16:27:22.979794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.381 [2024-11-09 16:27:22.979894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.381 [2024-11-09 16:27:22.979904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:03.381 [2024-11-09 16:27:22.979910] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:03.381 [2024-11-09 16:27:22.979920] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.381 [2024-11-09 16:27:22.979955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.381 [2024-11-09 16:27:22.979965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:03.381 [2024-11-09 16:27:22.979971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:03.381 [2024-11-09 16:27:22.979978] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.381 [2024-11-09 16:27:22.979996] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:03.381 [2024-11-09 16:27:22.982970] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.381 [2024-11-09 16:27:22.983063] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:03.381 [2024-11-09 16:27:22.983077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.977 ms 00:18:03.381 [2024-11-09 16:27:22.983083] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.381 [2024-11-09 16:27:22.983114] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.381 [2024-11-09 16:27:22.983120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:03.381 [2024-11-09 16:27:22.983127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:03.381 [2024-11-09 16:27:22.983133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.381 [2024-11-09 16:27:22.983153] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:03.381 [2024-11-09 16:27:22.983259] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:03.381 [2024-11-09 16:27:22.983272] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:03.381 [2024-11-09 16:27:22.983281] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:03.381 [2024-11-09 16:27:22.983290] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:03.381 [2024-11-09 16:27:22.983297] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:03.381 [2024-11-09 16:27:22.983306] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:03.381 [2024-11-09 16:27:22.983318] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:03.381 [2024-11-09 16:27:22.983325] 
ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:03.381 [2024-11-09 16:27:22.983330] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:03.381 [2024-11-09 16:27:22.983337] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.381 [2024-11-09 16:27:22.983342] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:03.381 [2024-11-09 16:27:22.983349] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.185 ms 00:18:03.381 [2024-11-09 16:27:22.983354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.381 [2024-11-09 16:27:22.983403] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.381 [2024-11-09 16:27:22.983409] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:03.381 [2024-11-09 16:27:22.983416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:03.381 [2024-11-09 16:27:22.983423] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.381 [2024-11-09 16:27:22.983479] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:03.381 [2024-11-09 16:27:22.983486] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:03.381 [2024-11-09 16:27:22.983493] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:03.381 [2024-11-09 16:27:22.983499] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.381 [2024-11-09 16:27:22.983506] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:03.381 [2024-11-09 16:27:22.983511] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:03.381 [2024-11-09 16:27:22.983517] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:03.381 [2024-11-09 16:27:22.983522] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:03.381 [2024-11-09 16:27:22.983528] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:03.381 [2024-11-09 16:27:22.983533] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:03.381 [2024-11-09 16:27:22.983539] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:03.381 [2024-11-09 16:27:22.983544] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:03.381 [2024-11-09 16:27:22.983552] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:03.381 [2024-11-09 16:27:22.983557] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:03.381 [2024-11-09 16:27:22.983570] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:03.381 [2024-11-09 16:27:22.983575] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.381 [2024-11-09 16:27:22.983582] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:03.381 [2024-11-09 16:27:22.983588] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:03.381 [2024-11-09 16:27:22.983594] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.381 [2024-11-09 16:27:22.983600] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:03.381 [2024-11-09 16:27:22.983606] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:03.381 [2024-11-09 16:27:22.983611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
4096.00 MiB 00:18:03.381 [2024-11-09 16:27:22.983618] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:03.381 [2024-11-09 16:27:22.983623] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:03.381 [2024-11-09 16:27:22.983629] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:03.381 [2024-11-09 16:27:22.983634] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:03.381 [2024-11-09 16:27:22.983640] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:03.381 [2024-11-09 16:27:22.983645] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:03.381 [2024-11-09 16:27:22.983651] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:03.381 [2024-11-09 16:27:22.983656] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:03.381 [2024-11-09 16:27:22.983662] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:03.381 [2024-11-09 16:27:22.983667] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:03.381 [2024-11-09 16:27:22.983675] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:03.381 [2024-11-09 16:27:22.983680] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:03.381 [2024-11-09 16:27:22.983687] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:03.381 [2024-11-09 16:27:22.983691] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:03.381 [2024-11-09 16:27:22.983697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:03.381 [2024-11-09 16:27:22.983702] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:03.381 [2024-11-09 16:27:22.983709] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:03.381 [2024-11-09 16:27:22.983714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:03.381 [2024-11-09 16:27:22.983720] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:03.381 [2024-11-09 16:27:22.983725] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:03.381 [2024-11-09 16:27:22.983732] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:03.381 [2024-11-09 16:27:22.983737] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:03.381 [2024-11-09 16:27:22.983746] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:03.381 [2024-11-09 16:27:22.983751] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:03.381 [2024-11-09 16:27:22.983757] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:03.381 [2024-11-09 16:27:22.983762] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:03.381 [2024-11-09 16:27:22.983770] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:03.381 [2024-11-09 16:27:22.983775] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:03.381 [2024-11-09 16:27:22.983782] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:03.381 [2024-11-09 16:27:22.983789] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:03.381 [2024-11-09 16:27:22.983797] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:03.381 [2024-11-09 16:27:22.983803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:03.381 [2024-11-09 16:27:22.983810] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:03.381 [2024-11-09 16:27:22.983815] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:03.381 [2024-11-09 16:27:22.983821] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:03.381 [2024-11-09 16:27:22.983827] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:03.381 [2024-11-09 16:27:22.983833] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:03.381 [2024-11-09 16:27:22.983839] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:03.381 [2024-11-09 16:27:22.983845] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:03.381 [2024-11-09 16:27:22.983850] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:03.381 [2024-11-09 16:27:22.983856] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:03.381 [2024-11-09 16:27:22.983862] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:03.381 [2024-11-09 16:27:22.983871] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:03.382 [2024-11-09 16:27:22.983876] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:03.382 [2024-11-09 16:27:22.983883] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:03.382 [2024-11-09 16:27:22.983889] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:03.382 [2024-11-09 16:27:22.983896] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:03.382 [2024-11-09 16:27:22.983901] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:03.382 [2024-11-09 16:27:22.983907] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:03.382 [2024-11-09 16:27:22.983913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.382 [2024-11-09 16:27:22.983920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:03.382 [2024-11-09 16:27:22.983925] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.470 ms 00:18:03.382 [2024-11-09 16:27:22.983932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.382 [2024-11-09 16:27:22.996097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.382 [2024-11-09 16:27:22.996197] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:03.382 [2024-11-09 16:27:22.996251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.125 ms 00:18:03.382 [2024-11-09 16:27:22.996271] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.382 [2024-11-09 16:27:22.996348] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.382 [2024-11-09 16:27:22.996368] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:03.382 [2024-11-09 16:27:22.996385] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:03.382 [2024-11-09 16:27:22.996402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.382 [2024-11-09 16:27:23.020589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.382 [2024-11-09 16:27:23.020684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:03.382 [2024-11-09 16:27:23.020727] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.143 ms 00:18:03.382 [2024-11-09 16:27:23.020748] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.382 [2024-11-09 16:27:23.020781] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.382 [2024-11-09 16:27:23.020800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:03.382 [2024-11-09 16:27:23.020815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:03.382 [2024-11-09 16:27:23.020833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.382 [2024-11-09 16:27:23.021140] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.382 [2024-11-09 16:27:23.021185] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:03.382 [2024-11-09 16:27:23.021201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:18:03.382 [2024-11-09 16:27:23.021217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.382 [2024-11-09 16:27:23.021331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.382 [2024-11-09 16:27:23.021397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:03.382 [2024-11-09 16:27:23.021416] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:03.382 [2024-11-09 16:27:23.021432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.382 [2024-11-09 16:27:23.033444] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.382 [2024-11-09 16:27:23.033536] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:03.382 [2024-11-09 16:27:23.033586] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.988 ms 00:18:03.382 [2024-11-09 16:27:23.033605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.382 [2024-11-09 16:27:23.043048] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:03.382 [2024-11-09 16:27:23.045424] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.382 
[2024-11-09 16:27:23.045505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:03.382 [2024-11-09 16:27:23.045545] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.749 ms 00:18:03.382 [2024-11-09 16:27:23.045562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.382 [2024-11-09 16:27:23.110275] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:03.382 [2024-11-09 16:27:23.110378] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:03.382 [2024-11-09 16:27:23.110426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.681 ms 00:18:03.382 [2024-11-09 16:27:23.110444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:03.382 [2024-11-09 16:27:23.110484] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 00:18:03.382 [2024-11-09 16:27:23.110511] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB 00:18:07.588 [2024-11-09 16:27:27.087265] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.588 [2024-11-09 16:27:27.087596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:07.588 [2024-11-09 16:27:27.087690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3976.752 ms 00:18:07.588 [2024-11-09 16:27:27.087719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.588 [2024-11-09 16:27:27.087954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.588 [2024-11-09 16:27:27.088001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:07.588 [2024-11-09 16:27:27.088084] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.171 ms 00:18:07.588 [2024-11-09 16:27:27.088109] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.588 [2024-11-09 16:27:27.114675] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.588 [2024-11-09 16:27:27.114866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:07.588 [2024-11-09 16:27:27.114994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.487 ms 00:18:07.588 [2024-11-09 16:27:27.115021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.588 [2024-11-09 16:27:27.140721] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.588 [2024-11-09 16:27:27.140908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:07.588 [2024-11-09 16:27:27.141000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.637 ms 00:18:07.588 [2024-11-09 16:27:27.141021] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.588 [2024-11-09 16:27:27.141410] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.588 [2024-11-09 16:27:27.141495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:07.588 [2024-11-09 16:27:27.141600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:18:07.588 [2024-11-09 16:27:27.141627] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.588 [2024-11-09 16:27:27.212681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.588 [2024-11-09 16:27:27.212859] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:07.588 [2024-11-09 16:27:27.212988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.971 ms 00:18:07.588 [2024-11-09 16:27:27.213013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.588 [2024-11-09 16:27:27.240855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.588 [2024-11-09 16:27:27.244116] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:07.588 [2024-11-09 16:27:27.244150] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.781 ms 00:18:07.588 [2024-11-09 16:27:27.244159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.588 [2024-11-09 16:27:27.245652] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.588 [2024-11-09 16:27:27.245699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:07.588 [2024-11-09 16:27:27.245715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.438 ms 00:18:07.588 [2024-11-09 16:27:27.245722] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.588 [2024-11-09 16:27:27.272422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.588 [2024-11-09 16:27:27.272471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:07.588 [2024-11-09 16:27:27.272487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.634 ms 00:18:07.588 [2024-11-09 16:27:27.272495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.588 [2024-11-09 16:27:27.272558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.588 [2024-11-09 16:27:27.272569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:07.588 [2024-11-09 16:27:27.272581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:07.588 [2024-11-09 16:27:27.272589] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.588 [2024-11-09 16:27:27.272687] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:07.588 [2024-11-09 16:27:27.272697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:07.588 [2024-11-09 16:27:27.272709] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:07.588 [2024-11-09 16:27:27.272717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:07.588 [2024-11-09 16:27:27.273919] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4300.292 ms, result 0 00:18:07.588 { 00:18:07.588 "name": "ftl0", 00:18:07.588 "uuid": "f312cd53-be96-4ae1-a0bf-7bec45782f5f" 00:18:07.588 } 00:18:07.588 16:27:27 -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:07.588 16:27:27 -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:07.847 16:27:27 -- ftl/restore.sh@63 -- # echo ']}' 00:18:07.847 16:27:27 -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:08.107 [2024-11-09 16:27:27.685096] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.107 [2024-11-09 16:27:27.685134] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:08.107 [2024-11-09 16:27:27.685157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 
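The "FTL startup" management process above completed in 4300.292 ms, after which restore.sh (lines 61-65) snapshots the bdev subsystem to JSON and unloads ftl0 (the shutdown being traced here), so the device can later be re-attached from the saved config. The layout numbers are self-consistent: 20971520 L2P entries * 4 B per address = 80 MiB, matching the 80.00 MiB l2p region, and --l2p_dram_limit 10 is why the cache reported "l2p maximum resident size is: 9 (of 10) MiB". A sketch of the create/save/unload sequence, assuming the redirect target inferred from the spdk_dd --json argument later in the log:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -t 240 bdev_ftl_create -b ftl0 \
    -d 08b2fac3-55a0-4feb-9bb1-6cce4e05c9a8 --l2p_dram_limit 10 -c nvc0n1p0
{
  echo '{"subsystems": ['
  "$rpc" save_subsystem_config -n bdev
  echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
"$rpc" bdev_ftl_unload -b ftl0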
00:18:08.107 [2024-11-09 16:27:27.685166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.107 [2024-11-09 16:27:27.685183] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:08.107 [2024-11-09 16:27:27.687253] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.107 [2024-11-09 16:27:27.687275] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:08.107 [2024-11-09 16:27:27.687284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.055 ms 00:18:08.107 [2024-11-09 16:27:27.687296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.107 [2024-11-09 16:27:27.687496] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.107 [2024-11-09 16:27:27.687503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:08.107 [2024-11-09 16:27:27.687511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:18:08.107 [2024-11-09 16:27:27.687518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.107 [2024-11-09 16:27:27.689981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.107 [2024-11-09 16:27:27.689998] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:08.107 [2024-11-09 16:27:27.690007] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.450 ms 00:18:08.107 [2024-11-09 16:27:27.690013] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.107 [2024-11-09 16:27:27.694607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.107 [2024-11-09 16:27:27.694630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:18:08.107 [2024-11-09 16:27:27.694638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.576 ms 00:18:08.107 [2024-11-09 16:27:27.694644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.107 [2024-11-09 16:27:27.712818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.107 [2024-11-09 16:27:27.712845] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:08.107 [2024-11-09 16:27:27.712855] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.119 ms 00:18:08.107 [2024-11-09 16:27:27.712861] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.107 [2024-11-09 16:27:27.725392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.107 [2024-11-09 16:27:27.725495] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:08.107 [2024-11-09 16:27:27.725512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.499 ms 00:18:08.107 [2024-11-09 16:27:27.725519] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.107 [2024-11-09 16:27:27.725624] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.107 [2024-11-09 16:27:27.725633] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:08.107 [2024-11-09 16:27:27.725640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:18:08.107 [2024-11-09 16:27:27.725648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.107 [2024-11-09 16:27:27.743201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.107 [2024-11-09 
16:27:27.743236] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:18:08.107 [2024-11-09 16:27:27.743246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.535 ms 00:18:08.107 [2024-11-09 16:27:27.743252] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.107 [2024-11-09 16:27:27.760758] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.107 [2024-11-09 16:27:27.760812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:18:08.107 [2024-11-09 16:27:27.760824] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.475 ms 00:18:08.107 [2024-11-09 16:27:27.760830] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.107 [2024-11-09 16:27:27.777964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.107 [2024-11-09 16:27:27.777989] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:08.107 [2024-11-09 16:27:27.777998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.103 ms 00:18:08.107 [2024-11-09 16:27:27.778004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.107 [2024-11-09 16:27:27.795201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.107 [2024-11-09 16:27:27.795233] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:08.107 [2024-11-09 16:27:27.795242] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.139 ms 00:18:08.107 [2024-11-09 16:27:27.795248] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.107 [2024-11-09 16:27:27.795278] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:08.107 [2024-11-09 16:27:27.795292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795369] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:08.107 [2024-11-09 16:27:27.795440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 
[2024-11-09 16:27:27.795530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 
state: free 00:18:08.108 [2024-11-09 16:27:27.795691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 
0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:08.108 [2024-11-09 16:27:27.795951] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:08.108 [2024-11-09 16:27:27.795958] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f312cd53-be96-4ae1-a0bf-7bec45782f5f 00:18:08.108 [2024-11-09 16:27:27.795964] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:08.108 [2024-11-09 16:27:27.795971] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:08.108 [2024-11-09 16:27:27.795976] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:08.108 [2024-11-09 16:27:27.795983] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:08.108 [2024-11-09 16:27:27.795989] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:08.108 [2024-11-09 16:27:27.795996] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:08.108 [2024-11-09 16:27:27.796001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:08.108 [2024-11-09 16:27:27.796008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:08.108 [2024-11-09 16:27:27.796012] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:08.108 [2024-11-09 16:27:27.796021] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.108 [2024-11-09 16:27:27.796026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:08.109 [2024-11-09 16:27:27.796036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.744 ms 00:18:08.109 [2024-11-09 16:27:27.796042] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.109 [2024-11-09 16:27:27.805399] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.109 [2024-11-09 16:27:27.805421] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:08.109 [2024-11-09 16:27:27.805430] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.330 ms 00:18:08.109 [2024-11-09 16:27:27.805436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.109 [2024-11-09 16:27:27.805584] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.109 [2024-11-09 16:27:27.805592] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:08.109 [2024-11-09 16:27:27.805600] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:18:08.109 [2024-11-09 16:27:27.805605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.109 [2024-11-09 16:27:27.840578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.109 [2024-11-09 16:27:27.840605] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:08.109 [2024-11-09 16:27:27.840615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.109 [2024-11-09 16:27:27.840621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.109 [2024-11-09 16:27:27.840670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.109 [2024-11-09 16:27:27.840678] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:08.109 [2024-11-09 16:27:27.840685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.109 [2024-11-09 16:27:27.840690] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.109 [2024-11-09 16:27:27.840740] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.109 [2024-11-09 16:27:27.840747] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:08.109 [2024-11-09 16:27:27.840754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.109 [2024-11-09 16:27:27.840760] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.109 [2024-11-09 16:27:27.840773] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.109 [2024-11-09 16:27:27.840779] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:08.109 [2024-11-09 16:27:27.840788] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.109 [2024-11-09 16:27:27.840793] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.367 [2024-11-09 16:27:27.898193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.367 [2024-11-09 16:27:27.898244] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:08.367 [2024-11-09 16:27:27.898255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.367 [2024-11-09 16:27:27.898263] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.367 [2024-11-09 16:27:27.920566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.367 [2024-11-09 16:27:27.920595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:08.367 [2024-11-09 16:27:27.920603] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:18:08.367 [2024-11-09 16:27:27.920609] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.367 [2024-11-09 16:27:27.920659] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.367 [2024-11-09 16:27:27.920667] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:08.367 [2024-11-09 16:27:27.920674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.367 [2024-11-09 16:27:27.920680] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.367 [2024-11-09 16:27:27.920714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.368 [2024-11-09 16:27:27.920721] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:08.368 [2024-11-09 16:27:27.920728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.368 [2024-11-09 16:27:27.920736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.368 [2024-11-09 16:27:27.920805] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.368 [2024-11-09 16:27:27.920813] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:08.368 [2024-11-09 16:27:27.920820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.368 [2024-11-09 16:27:27.920826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.368 [2024-11-09 16:27:27.920852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.368 [2024-11-09 16:27:27.920858] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:08.368 [2024-11-09 16:27:27.920865] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.368 [2024-11-09 16:27:27.920871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.368 [2024-11-09 16:27:27.920902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.368 [2024-11-09 16:27:27.920908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:08.368 [2024-11-09 16:27:27.920915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.368 [2024-11-09 16:27:27.920921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.368 [2024-11-09 16:27:27.920955] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:08.368 [2024-11-09 16:27:27.920963] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:08.368 [2024-11-09 16:27:27.920969] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:08.368 [2024-11-09 16:27:27.920976] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.368 [2024-11-09 16:27:27.921074] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 235.948 ms, result 0 00:18:08.368 true 00:18:08.368 16:27:27 -- ftl/restore.sh@66 -- # killprocess 73125 00:18:08.368 16:27:27 -- common/autotest_common.sh@936 -- # '[' -z 73125 ']' 00:18:08.368 16:27:27 -- common/autotest_common.sh@940 -- # kill -0 73125 00:18:08.368 16:27:27 -- common/autotest_common.sh@941 -- # uname 00:18:08.368 16:27:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:08.368 16:27:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73125 00:18:08.368 killing process with pid 73125 00:18:08.368 
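With "FTL shutdown" finished (235.948 ms; every one of the 100 bands dumped as 0 / 261120 valid blocks in state free, and WAF reported as inf because there were 0 user writes against 960 metadata writes), the app under test is killed and the test moves on to seeding data, as the log below shows: dd generates 256K * 4 KiB = 1 GiB of random data (262144 records; the run reports 298 MB/s), md5sum records a reference checksum, and spdk_dd pushes the file through ftl0, which it re-creates from the saved JSON config. Regrouped as commands:

testfile=/home/vagrant/spdk_repo/spdk/test/ftl/testfile
dd if=/dev/urandom of="$testfile" bs=4K count=256K   # 262144 * 4096 B = 1 GiB
md5sum "$testfile"                                   # reference checksum for the later compare
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
    --if="$testfile" --ob=ftl0 \
    --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json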
16:27:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:08.368 16:27:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:08.368 16:27:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73125' 00:18:08.368 16:27:27 -- common/autotest_common.sh@955 -- # kill 73125 00:18:08.368 16:27:27 -- common/autotest_common.sh@960 -- # wait 73125 00:18:12.567 16:27:31 -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:15.859 262144+0 records in 00:18:15.859 262144+0 records out 00:18:15.859 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.59952 s, 298 MB/s 00:18:15.859 16:27:35 -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:17.776 16:27:37 -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:17.776 [2024-11-09 16:27:37.296458] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:18:17.776 [2024-11-09 16:27:37.296833] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73380 ] 00:18:17.776 [2024-11-09 16:27:37.446165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.037 [2024-11-09 16:27:37.663714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.299 [2024-11-09 16:27:37.950864] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:18.299 [2024-11-09 16:27:37.950946] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:18.561 [2024-11-09 16:27:38.106155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.561 [2024-11-09 16:27:38.106406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:18.561 [2024-11-09 16:27:38.106432] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:18.561 [2024-11-09 16:27:38.106448] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.561 [2024-11-09 16:27:38.106516] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.562 [2024-11-09 16:27:38.106528] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:18.562 [2024-11-09 16:27:38.106538] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:18.562 [2024-11-09 16:27:38.106546] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.562 [2024-11-09 16:27:38.106568] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:18.562 [2024-11-09 16:27:38.107348] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:18.562 [2024-11-09 16:27:38.107378] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.562 [2024-11-09 16:27:38.107388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:18.562 [2024-11-09 16:27:38.107397] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:18:18.562 [2024-11-09 16:27:38.107405] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.562 [2024-11-09 16:27:38.109045] 
mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:18.562 [2024-11-09 16:27:38.123104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.562 [2024-11-09 16:27:38.123153] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:18.562 [2024-11-09 16:27:38.123166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.061 ms 00:18:18.562 [2024-11-09 16:27:38.123173] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.562 [2024-11-09 16:27:38.123268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.562 [2024-11-09 16:27:38.123280] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:18.562 [2024-11-09 16:27:38.123289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:18.562 [2024-11-09 16:27:38.123296] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.562 [2024-11-09 16:27:38.131471] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.562 [2024-11-09 16:27:38.131511] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:18.562 [2024-11-09 16:27:38.131521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.098 ms 00:18:18.562 [2024-11-09 16:27:38.131530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.562 [2024-11-09 16:27:38.131622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.562 [2024-11-09 16:27:38.131631] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:18.562 [2024-11-09 16:27:38.131640] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:18.562 [2024-11-09 16:27:38.131648] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.562 [2024-11-09 16:27:38.131692] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.562 [2024-11-09 16:27:38.131702] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:18.562 [2024-11-09 16:27:38.131712] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:18.562 [2024-11-09 16:27:38.131719] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.562 [2024-11-09 16:27:38.131750] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:18.562 [2024-11-09 16:27:38.135806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.562 [2024-11-09 16:27:38.135843] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:18.562 [2024-11-09 16:27:38.135854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.070 ms 00:18:18.562 [2024-11-09 16:27:38.135862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.562 [2024-11-09 16:27:38.135898] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.562 [2024-11-09 16:27:38.135907] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:18.562 [2024-11-09 16:27:38.135915] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:18.562 [2024-11-09 16:27:38.135926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.562 [2024-11-09 16:27:38.135974] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:18.562 [2024-11-09 16:27:38.135997] 
upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:18:18.562 [2024-11-09 16:27:38.136032] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:18.562 [2024-11-09 16:27:38.136050] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:18:18.562 [2024-11-09 16:27:38.136127] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:18:18.562 [2024-11-09 16:27:38.136138] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:18.562 [2024-11-09 16:27:38.136153] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:18:18.562 [2024-11-09 16:27:38.136166] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:18.562 [2024-11-09 16:27:38.136175] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:18.562 [2024-11-09 16:27:38.136183] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:18.562 [2024-11-09 16:27:38.136191] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:18.562 [2024-11-09 16:27:38.136199] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:18:18.562 [2024-11-09 16:27:38.136207] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:18:18.562 [2024-11-09 16:27:38.136216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.562 [2024-11-09 16:27:38.136250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:18.562 [2024-11-09 16:27:38.136259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:18:18.562 [2024-11-09 16:27:38.136267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.562 [2024-11-09 16:27:38.136332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.562 [2024-11-09 16:27:38.136343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:18.562 [2024-11-09 16:27:38.136351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:18:18.562 [2024-11-09 16:27:38.136358] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.562 [2024-11-09 16:27:38.136430] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:18.562 [2024-11-09 16:27:38.136442] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:18.562 [2024-11-09 16:27:38.136453] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:18.562 [2024-11-09 16:27:38.136462] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.562 [2024-11-09 16:27:38.136470] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:18.562 [2024-11-09 16:27:38.136477] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:18.562 [2024-11-09 16:27:38.136485] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:18.562 [2024-11-09 16:27:38.136493] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:18.562 [2024-11-09 16:27:38.136501] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 
00:18:18.562 [2024-11-09 16:27:38.136508] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:18.562 [2024-11-09 16:27:38.136517] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:18.562 [2024-11-09 16:27:38.136525] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:18.562 [2024-11-09 16:27:38.136532] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:18.562 [2024-11-09 16:27:38.136539] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:18.562 [2024-11-09 16:27:38.136554] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:18:18.562 [2024-11-09 16:27:38.136561] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.562 [2024-11-09 16:27:38.136575] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:18.562 [2024-11-09 16:27:38.136582] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:18:18.562 [2024-11-09 16:27:38.136591] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.562 [2024-11-09 16:27:38.136599] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:18:18.562 [2024-11-09 16:27:38.136606] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:18:18.562 [2024-11-09 16:27:38.136613] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:18:18.562 [2024-11-09 16:27:38.136620] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:18.562 [2024-11-09 16:27:38.136627] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:18.562 [2024-11-09 16:27:38.136634] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:18.562 [2024-11-09 16:27:38.136642] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:18.562 [2024-11-09 16:27:38.136650] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:18:18.562 [2024-11-09 16:27:38.136657] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:18.562 [2024-11-09 16:27:38.136663] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:18.562 [2024-11-09 16:27:38.136670] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:18.562 [2024-11-09 16:27:38.136677] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:18.562 [2024-11-09 16:27:38.136683] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:18.562 [2024-11-09 16:27:38.136690] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:18:18.562 [2024-11-09 16:27:38.136697] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:18:18.562 [2024-11-09 16:27:38.136705] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:18.562 [2024-11-09 16:27:38.136712] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:18.562 [2024-11-09 16:27:38.136718] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:18.562 [2024-11-09 16:27:38.136725] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:18.563 [2024-11-09 16:27:38.136732] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:18:18.563 [2024-11-09 16:27:38.136738] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:18.563 [2024-11-09 16:27:38.136745] ftl_layout.c: 766:ftl_layout_dump: 
*NOTICE*: [FTL][ftl0] Base device layout: 00:18:18.563 [2024-11-09 16:27:38.136755] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:18.563 [2024-11-09 16:27:38.136764] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:18.563 [2024-11-09 16:27:38.136773] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:18.563 [2024-11-09 16:27:38.136780] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:18.563 [2024-11-09 16:27:38.136787] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:18.563 [2024-11-09 16:27:38.136793] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:18.563 [2024-11-09 16:27:38.136800] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:18.563 [2024-11-09 16:27:38.136806] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:18.563 [2024-11-09 16:27:38.136813] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:18.563 [2024-11-09 16:27:38.136820] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:18.563 [2024-11-09 16:27:38.136831] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:18.563 [2024-11-09 16:27:38.136840] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:18.563 [2024-11-09 16:27:38.136847] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:18:18.563 [2024-11-09 16:27:38.136854] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:18:18.563 [2024-11-09 16:27:38.136861] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:18:18.563 [2024-11-09 16:27:38.136869] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:18:18.563 [2024-11-09 16:27:38.136879] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:18:18.563 [2024-11-09 16:27:38.136887] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:18:18.563 [2024-11-09 16:27:38.136894] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:18:18.563 [2024-11-09 16:27:38.136902] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:18:18.563 [2024-11-09 16:27:38.136908] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:18:18.563 [2024-11-09 16:27:38.136915] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:18:18.563 [2024-11-09 16:27:38.136923] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:18:18.563 [2024-11-09 16:27:38.136930] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:18:18.563 [2024-11-09 16:27:38.136938] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:18.563 [2024-11-09 16:27:38.136947] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:18.563 [2024-11-09 16:27:38.136955] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:18.563 [2024-11-09 16:27:38.136962] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:18.563 [2024-11-09 16:27:38.136969] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:18.563 [2024-11-09 16:27:38.136976] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:18.563 [2024-11-09 16:27:38.136983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.563 [2024-11-09 16:27:38.136993] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:18.563 [2024-11-09 16:27:38.137001] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:18:18.563 [2024-11-09 16:27:38.137010] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.563 [2024-11-09 16:27:38.155043] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.563 [2024-11-09 16:27:38.155092] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:18.563 [2024-11-09 16:27:38.155105] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.990 ms 00:18:18.563 [2024-11-09 16:27:38.155120] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.563 [2024-11-09 16:27:38.155213] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.563 [2024-11-09 16:27:38.155252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:18.563 [2024-11-09 16:27:38.155261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:18.563 [2024-11-09 16:27:38.155269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.563 [2024-11-09 16:27:38.199248] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.563 [2024-11-09 16:27:38.199299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:18.563 [2024-11-09 16:27:38.199311] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.926 ms 00:18:18.563 [2024-11-09 16:27:38.199320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.563 [2024-11-09 16:27:38.199368] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.563 [2024-11-09 16:27:38.199379] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:18.563 [2024-11-09 16:27:38.199388] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:18.563 [2024-11-09 16:27:38.199396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.563 [2024-11-09 16:27:38.199921] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.563 [2024-11-09 
16:27:38.199945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:18.563 [2024-11-09 16:27:38.199956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms 00:18:18.563 [2024-11-09 16:27:38.199970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.563 [2024-11-09 16:27:38.200097] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.563 [2024-11-09 16:27:38.200108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:18.563 [2024-11-09 16:27:38.200117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:18:18.563 [2024-11-09 16:27:38.200125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.563 [2024-11-09 16:27:38.216487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.563 [2024-11-09 16:27:38.216529] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:18.563 [2024-11-09 16:27:38.216540] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.340 ms 00:18:18.563 [2024-11-09 16:27:38.216548] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.563 [2024-11-09 16:27:38.230651] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:18.563 [2024-11-09 16:27:38.230701] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:18.563 [2024-11-09 16:27:38.230714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.563 [2024-11-09 16:27:38.230723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:18.563 [2024-11-09 16:27:38.230734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.061 ms 00:18:18.563 [2024-11-09 16:27:38.230742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.563 [2024-11-09 16:27:38.256634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.563 [2024-11-09 16:27:38.256684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:18.563 [2024-11-09 16:27:38.256698] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.842 ms 00:18:18.563 [2024-11-09 16:27:38.256706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.563 [2024-11-09 16:27:38.269442] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.563 [2024-11-09 16:27:38.269488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:18.563 [2024-11-09 16:27:38.269500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.703 ms 00:18:18.563 [2024-11-09 16:27:38.269507] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.563 [2024-11-09 16:27:38.281597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.563 [2024-11-09 16:27:38.281640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:18.563 [2024-11-09 16:27:38.281661] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.043 ms 00:18:18.563 [2024-11-09 16:27:38.281668] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.563 [2024-11-09 16:27:38.282049] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.563 [2024-11-09 16:27:38.282064] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:18:18.563 [2024-11-09 16:27:38.282073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:18:18.563 [2024-11-09 16:27:38.282081] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.825 [2024-11-09 16:27:38.347282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.825 [2024-11-09 16:27:38.347343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:18.825 [2024-11-09 16:27:38.347360] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.183 ms 00:18:18.825 [2024-11-09 16:27:38.347368] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.825 [2024-11-09 16:27:38.359065] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:18.825 [2024-11-09 16:27:38.362081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.825 [2024-11-09 16:27:38.362125] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:18.825 [2024-11-09 16:27:38.362138] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.645 ms 00:18:18.825 [2024-11-09 16:27:38.362147] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.826 [2024-11-09 16:27:38.362240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.826 [2024-11-09 16:27:38.362252] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:18.826 [2024-11-09 16:27:38.362262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:18:18.826 [2024-11-09 16:27:38.362270] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.826 [2024-11-09 16:27:38.362346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.826 [2024-11-09 16:27:38.362358] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:18.826 [2024-11-09 16:27:38.362367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:18.826 [2024-11-09 16:27:38.362376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.826 [2024-11-09 16:27:38.363696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.826 [2024-11-09 16:27:38.363743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:18:18.826 [2024-11-09 16:27:38.363754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.303 ms 00:18:18.826 [2024-11-09 16:27:38.363762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.826 [2024-11-09 16:27:38.363796] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.826 [2024-11-09 16:27:38.363805] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:18.826 [2024-11-09 16:27:38.363814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:18.826 [2024-11-09 16:27:38.363828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.826 [2024-11-09 16:27:38.363864] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:18.826 [2024-11-09 16:27:38.363874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.826 [2024-11-09 16:27:38.363883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:18.826 [2024-11-09 16:27:38.363893] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.011 ms 00:18:18.826 [2024-11-09 16:27:38.363901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.826 [2024-11-09 16:27:38.389670] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.826 [2024-11-09 16:27:38.389732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:18.826 [2024-11-09 16:27:38.389745] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.749 ms 00:18:18.826 [2024-11-09 16:27:38.389754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.826 [2024-11-09 16:27:38.389834] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:18.826 [2024-11-09 16:27:38.389850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:18.826 [2024-11-09 16:27:38.389859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:18.826 [2024-11-09 16:27:38.389867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:18.826 [2024-11-09 16:27:38.391103] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 284.481 ms, result 0 00:18:19.766
[2024-11-09T16:27:40.473Z] Copying: 13/1024 [MB] (13 MBps) [... intermediate spdk_dd copy progress updates omitted ...] [2024-11-09T16:28:35.101Z] Copying: 1024/1024 [MB] (average 18 MBps)
[2024-11-09 16:28:35.046578] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.331 [2024-11-09 16:28:35.046641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:15.331 [2024-11-09 16:28:35.046657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:15.331 [2024-11-09 16:28:35.046665] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.331 [2024-11-09 16:28:35.046688] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:15.331 [2024-11-09 16:28:35.049799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.331 [2024-11-09 16:28:35.050012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:15.331 [2024-11-09 16:28:35.050045] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.093 ms 00:19:15.331 [2024-11-09 16:28:35.050053] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.331 [2024-11-09 16:28:35.053279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.331 [2024-11-09 16:28:35.053327] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:15.331 [2024-11-09 16:28:35.053338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.191 ms 00:19:15.331 [2024-11-09 16:28:35.053346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.331 [2024-11-09 16:28:35.071445] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.331 [2024-11-09 16:28:35.071510] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:15.331 [2024-11-09 16:28:35.071523] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.081 ms 00:19:15.331 [2024-11-09 16:28:35.071540] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.331 [2024-11-09 16:28:35.077663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.331 [2024-11-09 16:28:35.077704] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:19:15.331 [2024-11-09 16:28:35.077715] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.078 ms 00:19:15.331 [2024-11-09 16:28:35.077723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.601 [2024-11-09 16:28:35.105300] mngt/ftl_mngt.c:
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.601 [2024-11-09 16:28:35.105350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:15.601 [2024-11-09 16:28:35.105364] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.501 ms 00:19:15.601 [2024-11-09 16:28:35.105371] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.601 [2024-11-09 16:28:35.121698] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.601 [2024-11-09 16:28:35.121744] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:15.601 [2024-11-09 16:28:35.121758] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.275 ms 00:19:15.601 [2024-11-09 16:28:35.121766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.601 [2024-11-09 16:28:35.121913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.601 [2024-11-09 16:28:35.121924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:15.601 [2024-11-09 16:28:35.121934] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:19:15.601 [2024-11-09 16:28:35.121942] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.601 [2024-11-09 16:28:35.148934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.601 [2024-11-09 16:28:35.149154] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:19:15.601 [2024-11-09 16:28:35.149178] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.976 ms 00:19:15.601 [2024-11-09 16:28:35.149186] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.601 [2024-11-09 16:28:35.175386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.601 [2024-11-09 16:28:35.175585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:19:15.601 [2024-11-09 16:28:35.175606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.101 ms 00:19:15.601 [2024-11-09 16:28:35.175630] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.601 [2024-11-09 16:28:35.209703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.601 [2024-11-09 16:28:35.209759] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:15.601 [2024-11-09 16:28:35.209774] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.792 ms 00:19:15.601 [2024-11-09 16:28:35.209781] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.601 [2024-11-09 16:28:35.235082] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.601 [2024-11-09 16:28:35.235131] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:15.601 [2024-11-09 16:28:35.235143] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.191 ms 00:19:15.601 [2024-11-09 16:28:35.235149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.601 [2024-11-09 16:28:35.235195] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:15.601 [2024-11-09 16:28:35.235211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 
16:28:35.235256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:19:15.601 [2024-11-09 16:28:35.235447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:15.601 [2024-11-09 16:28:35.235631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.235995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:15.602 [2024-11-09 16:28:35.236011] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:15.602 [2024-11-09 16:28:35.236020] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
f312cd53-be96-4ae1-a0bf-7bec45782f5f 00:19:15.602 [2024-11-09 16:28:35.236028] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:15.602 [2024-11-09 16:28:35.236036] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:15.602 [2024-11-09 16:28:35.236043] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:15.602 [2024-11-09 16:28:35.236051] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:15.602 [2024-11-09 16:28:35.236059] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:15.602 [2024-11-09 16:28:35.236068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:15.602 [2024-11-09 16:28:35.236075] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:15.602 [2024-11-09 16:28:35.236082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:15.602 [2024-11-09 16:28:35.236095] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:15.602 [2024-11-09 16:28:35.236103] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.602 [2024-11-09 16:28:35.236112] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:15.602 [2024-11-09 16:28:35.236121] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.910 ms 00:19:15.602 [2024-11-09 16:28:35.236132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.602 [2024-11-09 16:28:35.249568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.602 [2024-11-09 16:28:35.249755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:15.602 [2024-11-09 16:28:35.249775] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.398 ms 00:19:15.602 [2024-11-09 16:28:35.249782] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.602 [2024-11-09 16:28:35.250012] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.602 [2024-11-09 16:28:35.250022] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:15.602 [2024-11-09 16:28:35.250039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:19:15.602 [2024-11-09 16:28:35.250047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.602 [2024-11-09 16:28:35.289423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.602 [2024-11-09 16:28:35.289470] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:15.602 [2024-11-09 16:28:35.289480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.602 [2024-11-09 16:28:35.289490] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.602 [2024-11-09 16:28:35.289559] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.602 [2024-11-09 16:28:35.289568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:15.602 [2024-11-09 16:28:35.289583] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.602 [2024-11-09 16:28:35.289591] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.602 [2024-11-09 16:28:35.289672] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.602 [2024-11-09 16:28:35.289683] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:15.602 
[2024-11-09 16:28:35.289691] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.602 [2024-11-09 16:28:35.289699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.602 [2024-11-09 16:28:35.289715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.602 [2024-11-09 16:28:35.289723] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:15.602 [2024-11-09 16:28:35.289731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.602 [2024-11-09 16:28:35.289742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.927 [2024-11-09 16:28:35.370068] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.927 [2024-11-09 16:28:35.370119] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:15.927 [2024-11-09 16:28:35.370132] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.927 [2024-11-09 16:28:35.370143] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.927 [2024-11-09 16:28:35.401696] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.927 [2024-11-09 16:28:35.401743] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:15.927 [2024-11-09 16:28:35.401754] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.927 [2024-11-09 16:28:35.401770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.927 [2024-11-09 16:28:35.401841] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.927 [2024-11-09 16:28:35.401850] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:15.927 [2024-11-09 16:28:35.401859] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.927 [2024-11-09 16:28:35.401867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.927 [2024-11-09 16:28:35.401910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.927 [2024-11-09 16:28:35.401921] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:15.927 [2024-11-09 16:28:35.401929] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.927 [2024-11-09 16:28:35.401938] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.927 [2024-11-09 16:28:35.402045] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.927 [2024-11-09 16:28:35.402056] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:15.927 [2024-11-09 16:28:35.402065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.927 [2024-11-09 16:28:35.402073] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.927 [2024-11-09 16:28:35.402105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.927 [2024-11-09 16:28:35.402115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:15.927 [2024-11-09 16:28:35.402123] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.927 [2024-11-09 16:28:35.402131] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.927 [2024-11-09 16:28:35.402177] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.927 [2024-11-09 16:28:35.402188] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:15.927 [2024-11-09 16:28:35.402197] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.927 [2024-11-09 16:28:35.402205] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.927 [2024-11-09 16:28:35.402288] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:15.927 [2024-11-09 16:28:35.402300] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:15.927 [2024-11-09 16:28:35.402309] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:15.927 [2024-11-09 16:28:35.402318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.927 [2024-11-09 16:28:35.402455] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 355.839 ms, result 0 00:19:16.871 00:19:16.871 00:19:16.871 16:28:36 -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:16.871 [2024-11-09 16:28:36.464279] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:19:16.871 [2024-11-09 16:28:36.464421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73996 ] 00:19:16.871 [2024-11-09 16:28:36.617680] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.132 [2024-11-09 16:28:36.835517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:17.393 [2024-11-09 16:28:37.122237] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:17.393 [2024-11-09 16:28:37.122312] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:17.656 [2024-11-09 16:28:37.277477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.656 [2024-11-09 16:28:37.277534] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:17.656 [2024-11-09 16:28:37.277550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:17.656 [2024-11-09 16:28:37.277562] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.656 [2024-11-09 16:28:37.277616] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.656 [2024-11-09 16:28:37.277627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:17.656 [2024-11-09 16:28:37.277636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:17.656 [2024-11-09 16:28:37.277644] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.656 [2024-11-09 16:28:37.277665] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:17.656 [2024-11-09 16:28:37.278524] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:17.656 [2024-11-09 16:28:37.278551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.656 [2024-11-09 16:28:37.278559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:17.656 [2024-11-09 16:28:37.278569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.891 ms 00:19:17.656 [2024-11-09 16:28:37.278577] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.656 [2024-11-09 16:28:37.280431] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:17.656 [2024-11-09 16:28:37.294525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.656 [2024-11-09 16:28:37.294574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:17.656 [2024-11-09 16:28:37.294588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.097 ms 00:19:17.656 [2024-11-09 16:28:37.294598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.656 [2024-11-09 16:28:37.294681] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.656 [2024-11-09 16:28:37.294692] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:17.656 [2024-11-09 16:28:37.294701] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:17.656 [2024-11-09 16:28:37.294710] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.656 [2024-11-09 16:28:37.302874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.656 [2024-11-09 16:28:37.303090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:17.656 [2024-11-09 16:28:37.303110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.089 ms 00:19:17.656 [2024-11-09 16:28:37.303118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.656 [2024-11-09 16:28:37.303219] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.656 [2024-11-09 16:28:37.303256] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:17.656 [2024-11-09 16:28:37.303266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:19:17.656 [2024-11-09 16:28:37.303274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.656 [2024-11-09 16:28:37.303326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.656 [2024-11-09 16:28:37.303336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:17.656 [2024-11-09 16:28:37.303345] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:17.656 [2024-11-09 16:28:37.303353] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.656 [2024-11-09 16:28:37.303384] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:17.656 [2024-11-09 16:28:37.307540] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.656 [2024-11-09 16:28:37.307576] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:17.656 [2024-11-09 16:28:37.307587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.168 ms 00:19:17.656 [2024-11-09 16:28:37.307595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.656 [2024-11-09 16:28:37.307635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.656 [2024-11-09 16:28:37.307643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:17.656 [2024-11-09 16:28:37.307652] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:17.656 [2024-11-09 16:28:37.307662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:19:17.656 [2024-11-09 16:28:37.307714] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:17.656 [2024-11-09 16:28:37.307736] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:19:17.656 [2024-11-09 16:28:37.307770] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:17.656 [2024-11-09 16:28:37.307786] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:19:17.656 [2024-11-09 16:28:37.307861] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:19:17.656 [2024-11-09 16:28:37.307872] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:17.656 [2024-11-09 16:28:37.307885] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:19:17.656 [2024-11-09 16:28:37.307897] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:17.656 [2024-11-09 16:28:37.307907] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:17.656 [2024-11-09 16:28:37.307915] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:17.656 [2024-11-09 16:28:37.307923] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:17.656 [2024-11-09 16:28:37.307931] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:19:17.656 [2024-11-09 16:28:37.307938] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:19:17.656 [2024-11-09 16:28:37.307947] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.656 [2024-11-09 16:28:37.307955] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:17.656 [2024-11-09 16:28:37.307963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.236 ms 00:19:17.656 [2024-11-09 16:28:37.307971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.656 [2024-11-09 16:28:37.308034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.656 [2024-11-09 16:28:37.308044] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:17.656 [2024-11-09 16:28:37.308052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:17.656 [2024-11-09 16:28:37.308060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.656 [2024-11-09 16:28:37.308130] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:17.656 [2024-11-09 16:28:37.308140] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:17.656 [2024-11-09 16:28:37.308148] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:17.656 [2024-11-09 16:28:37.308156] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.656 [2024-11-09 16:28:37.308165] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:17.656 [2024-11-09 16:28:37.308172] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:17.656 [2024-11-09 16:28:37.308179] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:17.656 [2024-11-09 16:28:37.308187] ftl_layout.c: 
115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:17.656 [2024-11-09 16:28:37.308195] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:17.656 [2024-11-09 16:28:37.308202] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:17.656 [2024-11-09 16:28:37.308213] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:17.656 [2024-11-09 16:28:37.308246] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:17.656 [2024-11-09 16:28:37.308255] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:17.656 [2024-11-09 16:28:37.308263] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:17.656 [2024-11-09 16:28:37.308271] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:19:17.656 [2024-11-09 16:28:37.308278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.656 [2024-11-09 16:28:37.308293] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:17.656 [2024-11-09 16:28:37.308301] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:19:17.656 [2024-11-09 16:28:37.308308] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.656 [2024-11-09 16:28:37.308315] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:19:17.656 [2024-11-09 16:28:37.308322] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:19:17.656 [2024-11-09 16:28:37.308329] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:19:17.656 [2024-11-09 16:28:37.308336] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:17.656 [2024-11-09 16:28:37.308344] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:17.656 [2024-11-09 16:28:37.308351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:17.656 [2024-11-09 16:28:37.308358] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:17.656 [2024-11-09 16:28:37.308366] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:19:17.656 [2024-11-09 16:28:37.308373] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:17.656 [2024-11-09 16:28:37.308380] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:17.656 [2024-11-09 16:28:37.308387] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:17.656 [2024-11-09 16:28:37.308395] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:17.656 [2024-11-09 16:28:37.308401] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:17.656 [2024-11-09 16:28:37.308408] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:19:17.656 [2024-11-09 16:28:37.308415] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:19:17.656 [2024-11-09 16:28:37.308422] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:17.656 [2024-11-09 16:28:37.308429] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:17.656 [2024-11-09 16:28:37.308436] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:17.656 [2024-11-09 16:28:37.308443] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:17.657 [2024-11-09 16:28:37.308449] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:19:17.657 [2024-11-09 
16:28:37.308456] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:17.657 [2024-11-09 16:28:37.308462] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:17.657 [2024-11-09 16:28:37.308472] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:17.657 [2024-11-09 16:28:37.308482] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:17.657 [2024-11-09 16:28:37.308491] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:17.657 [2024-11-09 16:28:37.308500] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:17.657 [2024-11-09 16:28:37.308507] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:17.657 [2024-11-09 16:28:37.308514] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:17.657 [2024-11-09 16:28:37.308521] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:17.657 [2024-11-09 16:28:37.308528] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:17.657 [2024-11-09 16:28:37.308535] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:17.657 [2024-11-09 16:28:37.308543] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:17.657 [2024-11-09 16:28:37.308553] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:17.657 [2024-11-09 16:28:37.308562] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:17.657 [2024-11-09 16:28:37.308570] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:19:17.657 [2024-11-09 16:28:37.308578] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:19:17.657 [2024-11-09 16:28:37.308585] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:19:17.657 [2024-11-09 16:28:37.308592] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:19:17.657 [2024-11-09 16:28:37.308615] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:19:17.657 [2024-11-09 16:28:37.308622] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:19:17.657 [2024-11-09 16:28:37.308629] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:19:17.657 [2024-11-09 16:28:37.308636] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:19:17.657 [2024-11-09 16:28:37.308644] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:19:17.657 [2024-11-09 16:28:37.308651] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:19:17.657 [2024-11-09 16:28:37.308658] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:19:17.657 [2024-11-09 16:28:37.308666] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:19:17.657 [2024-11-09 16:28:37.308673] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:17.657 [2024-11-09 16:28:37.308682] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:17.657 [2024-11-09 16:28:37.308690] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:17.657 [2024-11-09 16:28:37.308697] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:17.657 [2024-11-09 16:28:37.308704] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:17.657 [2024-11-09 16:28:37.308710] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:17.657 [2024-11-09 16:28:37.308718] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.657 [2024-11-09 16:28:37.308726] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:17.657 [2024-11-09 16:28:37.308734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:19:17.657 [2024-11-09 16:28:37.308743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.657 [2024-11-09 16:28:37.327008] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.657 [2024-11-09 16:28:37.327058] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:17.657 [2024-11-09 16:28:37.327073] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.223 ms 00:19:17.657 [2024-11-09 16:28:37.327087] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.657 [2024-11-09 16:28:37.327181] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.657 [2024-11-09 16:28:37.327192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:17.657 [2024-11-09 16:28:37.327201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:17.657 [2024-11-09 16:28:37.327211] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.657 [2024-11-09 16:28:37.373506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.657 [2024-11-09 16:28:37.373699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:17.657 [2024-11-09 16:28:37.373721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.219 ms 00:19:17.657 [2024-11-09 16:28:37.373731] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.657 [2024-11-09 16:28:37.373783] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.657 [2024-11-09 16:28:37.373794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:17.657 [2024-11-09 16:28:37.373803] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:17.657 [2024-11-09 16:28:37.373811] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:17.657 [2024-11-09 16:28:37.374422] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.657 [2024-11-09 16:28:37.374454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:17.657 [2024-11-09 16:28:37.374465] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:19:17.657 [2024-11-09 16:28:37.374480] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.657 [2024-11-09 16:28:37.374609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.657 [2024-11-09 16:28:37.374619] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:17.657 [2024-11-09 16:28:37.374629] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:19:17.657 [2024-11-09 16:28:37.374636] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.657 [2024-11-09 16:28:37.391193] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.657 [2024-11-09 16:28:37.391265] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:17.657 [2024-11-09 16:28:37.391277] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.528 ms 00:19:17.657 [2024-11-09 16:28:37.391285] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.657 [2024-11-09 16:28:37.405339] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:17.657 [2024-11-09 16:28:37.405385] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:17.657 [2024-11-09 16:28:37.405397] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.657 [2024-11-09 16:28:37.405405] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:17.657 [2024-11-09 16:28:37.405417] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.004 ms 00:19:17.657 [2024-11-09 16:28:37.405424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.919 [2024-11-09 16:28:37.431645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.919 [2024-11-09 16:28:37.431693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:17.919 [2024-11-09 16:28:37.431706] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.168 ms 00:19:17.919 [2024-11-09 16:28:37.431714] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.919 [2024-11-09 16:28:37.444575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.919 [2024-11-09 16:28:37.444621] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:17.919 [2024-11-09 16:28:37.444634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.807 ms 00:19:17.919 [2024-11-09 16:28:37.444641] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.919 [2024-11-09 16:28:37.457085] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.919 [2024-11-09 16:28:37.457140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:17.919 [2024-11-09 16:28:37.457164] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.395 ms 00:19:17.919 [2024-11-09 16:28:37.457171] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.919 [2024-11-09 16:28:37.457587] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.919 [2024-11-09 16:28:37.457601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:17.919 [2024-11-09 16:28:37.457612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:19:17.919 [2024-11-09 16:28:37.457619] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.919 [2024-11-09 16:28:37.524350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.919 [2024-11-09 16:28:37.524410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:17.919 [2024-11-09 16:28:37.524428] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.708 ms 00:19:17.919 [2024-11-09 16:28:37.524438] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.919 [2024-11-09 16:28:37.536213] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:17.919 [2024-11-09 16:28:37.539380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.919 [2024-11-09 16:28:37.539563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:17.919 [2024-11-09 16:28:37.539585] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.877 ms 00:19:17.919 [2024-11-09 16:28:37.539603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.919 [2024-11-09 16:28:37.539683] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.919 [2024-11-09 16:28:37.539695] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:17.919 [2024-11-09 16:28:37.539705] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:17.919 [2024-11-09 16:28:37.539713] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.919 [2024-11-09 16:28:37.539784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.919 [2024-11-09 16:28:37.539795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:17.919 [2024-11-09 16:28:37.539804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:17.919 [2024-11-09 16:28:37.539812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.919 [2024-11-09 16:28:37.541168] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.919 [2024-11-09 16:28:37.541214] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:19:17.919 [2024-11-09 16:28:37.541245] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.333 ms 00:19:17.919 [2024-11-09 16:28:37.541254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.919 [2024-11-09 16:28:37.541292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.919 [2024-11-09 16:28:37.541302] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:17.919 [2024-11-09 16:28:37.541317] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:17.919 [2024-11-09 16:28:37.541325] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.919 [2024-11-09 16:28:37.541364] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:17.919 [2024-11-09 16:28:37.541375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.919 [2024-11-09 16:28:37.541386] mngt/ftl_mngt.c: 407:trace_step: 
*NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:17.919 [2024-11-09 16:28:37.541394] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:17.919 [2024-11-09 16:28:37.541401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.919 [2024-11-09 16:28:37.567780] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.919 [2024-11-09 16:28:37.567846] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:17.919 [2024-11-09 16:28:37.567862] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.358 ms 00:19:17.919 [2024-11-09 16:28:37.567871] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.919 [2024-11-09 16:28:37.567964] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:17.919 [2024-11-09 16:28:37.567975] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:17.919 [2024-11-09 16:28:37.567983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:19:17.919 [2024-11-09 16:28:37.567992] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:17.919 [2024-11-09 16:28:37.569306] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 291.275 ms, result 0 00:19:19.320  [2024-11-09T16:29:47.121Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-09 16:29:47.099916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.351 [2024-11-09 16:29:47.100357] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:27.351 [2024-11-09 16:29:47.100553] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:27.351 [2024-11-09 16:29:47.100588] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.351 [2024-11-09 16:29:47.100659] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:27.351 [2024-11-09 16:29:47.107499] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.351 [2024-11-09 16:29:47.107588] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:27.351 [2024-11-09 16:29:47.107614] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.801 ms 00:20:27.351 [2024-11-09 16:29:47.107634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.351 [2024-11-09 16:29:47.108292] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.351 [2024-11-09 16:29:47.108332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:27.351 [2024-11-09 16:29:47.108355] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:20:27.351 [2024-11-09 16:29:47.108375] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.351 [2024-11-09 16:29:47.112603]
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.351 [2024-11-09 16:29:47.112626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:27.351 [2024-11-09 16:29:47.112641] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.194 ms 00:20:27.351 [2024-11-09 16:29:47.112649] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.351 [2024-11-09 16:29:47.118807] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.613 [2024-11-09 16:29:47.118974] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:20:27.613 [2024-11-09 16:29:47.118995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.140 ms 00:20:27.613 [2024-11-09 16:29:47.119004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.613 [2024-11-09 16:29:47.146416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.613 [2024-11-09 16:29:47.146585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:27.613 [2024-11-09 16:29:47.146703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.334 ms 00:20:27.613 [2024-11-09 16:29:47.146727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.613 [2024-11-09 16:29:47.163404] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.613 [2024-11-09 16:29:47.163574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:27.613 [2024-11-09 16:29:47.163649] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.478 ms 00:20:27.613 [2024-11-09 16:29:47.163681] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.613 [2024-11-09 16:29:47.163951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.613 [2024-11-09 16:29:47.164083] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:27.613 [2024-11-09 16:29:47.164135] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.121 ms 00:20:27.613 [2024-11-09 16:29:47.164158] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.613 [2024-11-09 16:29:47.190386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.613 [2024-11-09 16:29:47.190550] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:20:27.613 [2024-11-09 16:29:47.190610] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.181 ms 00:20:27.613 [2024-11-09 16:29:47.190631] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.613 [2024-11-09 16:29:47.217610] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.613 [2024-11-09 16:29:47.217773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:20:27.613 [2024-11-09 16:29:47.217847] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.916 ms 00:20:27.613 [2024-11-09 16:29:47.217870] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.613 [2024-11-09 16:29:47.243117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.613 [2024-11-09 16:29:47.243299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:27.613 [2024-11-09 16:29:47.243365] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.201 ms 00:20:27.613 [2024-11-09 16:29:47.243386] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:20:27.613 [2024-11-09 16:29:47.268627] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.613 [2024-11-09 16:29:47.268783] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:27.613 [2024-11-09 16:29:47.268840] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.137 ms 00:20:27.613 [2024-11-09 16:29:47.268863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.613 [2024-11-09 16:29:47.268908] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:27.614 [2024-11-09 16:29:47.268944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.268977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 
261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.269878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.270350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.270974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.270999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271390] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:27.614 [2024-11-09 16:29:47.271438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 
16:29:47.271594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:27.615 [2024-11-09 16:29:47.271642] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:27.615 [2024-11-09 16:29:47.271651] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f312cd53-be96-4ae1-a0bf-7bec45782f5f 00:20:27.615 [2024-11-09 16:29:47.271659] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:27.615 [2024-11-09 16:29:47.271667] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:27.615 [2024-11-09 16:29:47.271675] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:27.615 [2024-11-09 16:29:47.271683] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:27.615 [2024-11-09 16:29:47.271691] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:27.615 [2024-11-09 16:29:47.271700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:27.615 [2024-11-09 16:29:47.271708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:27.615 [2024-11-09 16:29:47.271725] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:27.615 [2024-11-09 16:29:47.271733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:27.615 [2024-11-09 16:29:47.271743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.615 [2024-11-09 16:29:47.271752] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:27.615 [2024-11-09 16:29:47.271765] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.836 ms 00:20:27.615 [2024-11-09 16:29:47.271773] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.615 [2024-11-09 16:29:47.285395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.615 [2024-11-09 16:29:47.285443] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:27.615 [2024-11-09 16:29:47.285456] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.566 ms 00:20:27.615 [2024-11-09 16:29:47.285465] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.615 [2024-11-09 16:29:47.285701] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:27.615 [2024-11-09 16:29:47.285719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:27.615 [2024-11-09 16:29:47.285729] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:20:27.615 [2024-11-09 16:29:47.285738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.615 [2024-11-09 16:29:47.324856] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.615 [2024-11-09 16:29:47.324904] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:27.615 
[2024-11-09 16:29:47.324916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.615 [2024-11-09 16:29:47.324924] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.615 [2024-11-09 16:29:47.324984] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.615 [2024-11-09 16:29:47.324999] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:27.615 [2024-11-09 16:29:47.325008] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.615 [2024-11-09 16:29:47.325016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.615 [2024-11-09 16:29:47.325090] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.615 [2024-11-09 16:29:47.325101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:27.615 [2024-11-09 16:29:47.325109] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.615 [2024-11-09 16:29:47.325118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.615 [2024-11-09 16:29:47.325134] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.615 [2024-11-09 16:29:47.325168] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:27.615 [2024-11-09 16:29:47.325182] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.615 [2024-11-09 16:29:47.325190] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.876 [2024-11-09 16:29:47.404806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.876 [2024-11-09 16:29:47.404863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:27.876 [2024-11-09 16:29:47.404877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.876 [2024-11-09 16:29:47.404886] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.876 [2024-11-09 16:29:47.437252] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.876 [2024-11-09 16:29:47.437296] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:27.876 [2024-11-09 16:29:47.437315] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.876 [2024-11-09 16:29:47.437323] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.876 [2024-11-09 16:29:47.437389] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.876 [2024-11-09 16:29:47.437399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:27.876 [2024-11-09 16:29:47.437408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.876 [2024-11-09 16:29:47.437416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.876 [2024-11-09 16:29:47.437461] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.876 [2024-11-09 16:29:47.437471] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:27.876 [2024-11-09 16:29:47.437480] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.876 [2024-11-09 16:29:47.437493] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.876 [2024-11-09 16:29:47.437591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.876 [2024-11-09 16:29:47.437602] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:27.876 [2024-11-09 16:29:47.437612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.876 [2024-11-09 16:29:47.437621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.876 [2024-11-09 16:29:47.437651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.876 [2024-11-09 16:29:47.437660] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:27.876 [2024-11-09 16:29:47.437669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.876 [2024-11-09 16:29:47.437677] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.876 [2024-11-09 16:29:47.437723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.876 [2024-11-09 16:29:47.437733] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:27.876 [2024-11-09 16:29:47.437742] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.876 [2024-11-09 16:29:47.437750] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.876 [2024-11-09 16:29:47.437800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:27.876 [2024-11-09 16:29:47.437811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:27.876 [2024-11-09 16:29:47.437820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:27.876 [2024-11-09 16:29:47.437831] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:27.876 [2024-11-09 16:29:47.437961] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 338.047 ms, result 0 00:20:28.821 00:20:28.821 00:20:28.821 16:29:48 -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:30.739 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:30.739 16:29:50 -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:30.998 [2024-11-09 16:29:50.531333] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
00:20:30.998 [2024-11-09 16:29:50.531443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74763 ] 00:20:30.998 [2024-11-09 16:29:50.679412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:31.256 [2024-11-09 16:29:50.822419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.257 [2024-11-09 16:29:51.025422] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:31.257 [2024-11-09 16:29:51.025621] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:31.517 [2024-11-09 16:29:51.168196] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.517 [2024-11-09 16:29:51.168331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:31.517 [2024-11-09 16:29:51.168346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:31.517 [2024-11-09 16:29:51.168355] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.517 [2024-11-09 16:29:51.168393] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.517 [2024-11-09 16:29:51.168401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:31.517 [2024-11-09 16:29:51.168407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:31.517 [2024-11-09 16:29:51.168413] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.517 [2024-11-09 16:29:51.168428] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:31.517 [2024-11-09 16:29:51.168969] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:31.517 [2024-11-09 16:29:51.168979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.517 [2024-11-09 16:29:51.168986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:31.517 [2024-11-09 16:29:51.168993] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.555 ms 00:20:31.517 [2024-11-09 16:29:51.168998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.517 [2024-11-09 16:29:51.169938] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:31.517 [2024-11-09 16:29:51.179582] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.517 [2024-11-09 16:29:51.179697] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:31.517 [2024-11-09 16:29:51.179711] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.645 ms 00:20:31.517 [2024-11-09 16:29:51.179717] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.517 [2024-11-09 16:29:51.179756] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.517 [2024-11-09 16:29:51.179763] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:31.517 [2024-11-09 16:29:51.179769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:31.517 [2024-11-09 16:29:51.179775] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.517 [2024-11-09 16:29:51.184117] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.517 [2024-11-09 
16:29:51.184141] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:31.517 [2024-11-09 16:29:51.184148] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.297 ms 00:20:31.517 [2024-11-09 16:29:51.184154] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.517 [2024-11-09 16:29:51.184215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.517 [2024-11-09 16:29:51.184239] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:31.517 [2024-11-09 16:29:51.184246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:31.517 [2024-11-09 16:29:51.184254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.517 [2024-11-09 16:29:51.184286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.517 [2024-11-09 16:29:51.184293] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:31.517 [2024-11-09 16:29:51.184299] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:31.517 [2024-11-09 16:29:51.184305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.517 [2024-11-09 16:29:51.184326] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:31.517 [2024-11-09 16:29:51.187037] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.517 [2024-11-09 16:29:51.187135] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:31.517 [2024-11-09 16:29:51.187147] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.720 ms 00:20:31.517 [2024-11-09 16:29:51.187153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.517 [2024-11-09 16:29:51.187180] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.517 [2024-11-09 16:29:51.187186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:31.517 [2024-11-09 16:29:51.187194] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:31.517 [2024-11-09 16:29:51.187199] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.517 [2024-11-09 16:29:51.187214] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:31.517 [2024-11-09 16:29:51.187240] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:20:31.517 [2024-11-09 16:29:51.187270] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:31.517 [2024-11-09 16:29:51.187281] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:20:31.517 [2024-11-09 16:29:51.187336] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:20:31.517 [2024-11-09 16:29:51.187346] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:31.517 [2024-11-09 16:29:51.187354] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:20:31.517 [2024-11-09 16:29:51.187361] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:31.517 [2024-11-09 16:29:51.187368] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:31.517 [2024-11-09 16:29:51.187374] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:31.517 [2024-11-09 16:29:51.187379] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:31.517 [2024-11-09 16:29:51.187384] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:20:31.517 [2024-11-09 16:29:51.187390] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:20:31.517 [2024-11-09 16:29:51.187395] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.517 [2024-11-09 16:29:51.187401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:31.517 [2024-11-09 16:29:51.187407] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.183 ms 00:20:31.517 [2024-11-09 16:29:51.187414] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.517 [2024-11-09 16:29:51.187459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.517 [2024-11-09 16:29:51.187465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:31.517 [2024-11-09 16:29:51.187470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:31.517 [2024-11-09 16:29:51.187475] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.517 [2024-11-09 16:29:51.187527] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:31.517 [2024-11-09 16:29:51.187534] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:31.517 [2024-11-09 16:29:51.187541] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.517 [2024-11-09 16:29:51.187546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.517 [2024-11-09 16:29:51.187554] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:31.517 [2024-11-09 16:29:51.187559] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:31.517 [2024-11-09 16:29:51.187564] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:31.517 [2024-11-09 16:29:51.187569] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:31.517 [2024-11-09 16:29:51.187574] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:31.517 [2024-11-09 16:29:51.187580] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.517 [2024-11-09 16:29:51.187585] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:31.517 [2024-11-09 16:29:51.187590] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:31.517 [2024-11-09 16:29:51.187595] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:31.517 [2024-11-09 16:29:51.187600] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:31.517 [2024-11-09 16:29:51.187605] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:20:31.517 [2024-11-09 16:29:51.187610] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.517 [2024-11-09 16:29:51.187619] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:31.517 [2024-11-09 16:29:51.187624] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:20:31.517 [2024-11-09 16:29:51.187629] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:20:31.517 [2024-11-09 16:29:51.187634] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:20:31.517 [2024-11-09 16:29:51.187639] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:20:31.517 [2024-11-09 16:29:51.187644] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:20:31.517 [2024-11-09 16:29:51.187649] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:31.517 [2024-11-09 16:29:51.187654] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:31.517 [2024-11-09 16:29:51.187659] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:31.517 [2024-11-09 16:29:51.187664] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:31.518 [2024-11-09 16:29:51.187668] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:20:31.518 [2024-11-09 16:29:51.187673] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:31.518 [2024-11-09 16:29:51.187678] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:31.518 [2024-11-09 16:29:51.187683] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:31.518 [2024-11-09 16:29:51.187688] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:31.518 [2024-11-09 16:29:51.187692] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:31.518 [2024-11-09 16:29:51.187697] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:20:31.518 [2024-11-09 16:29:51.187702] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:20:31.518 [2024-11-09 16:29:51.187707] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:31.518 [2024-11-09 16:29:51.187711] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:31.518 [2024-11-09 16:29:51.187716] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.518 [2024-11-09 16:29:51.187721] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:31.518 [2024-11-09 16:29:51.187726] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:20:31.518 [2024-11-09 16:29:51.187731] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:31.518 [2024-11-09 16:29:51.187735] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:31.518 [2024-11-09 16:29:51.187741] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:31.518 [2024-11-09 16:29:51.187746] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:31.518 [2024-11-09 16:29:51.187751] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:31.518 [2024-11-09 16:29:51.187757] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:31.518 [2024-11-09 16:29:51.187763] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:31.518 [2024-11-09 16:29:51.187769] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:31.518 [2024-11-09 16:29:51.187774] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:31.518 [2024-11-09 16:29:51.187779] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:31.518 [2024-11-09 16:29:51.187785] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:31.518 [2024-11-09 16:29:51.187790] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:31.518 [2024-11-09 16:29:51.187797] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.518 [2024-11-09 16:29:51.187803] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:31.518 [2024-11-09 16:29:51.187809] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:20:31.518 [2024-11-09 16:29:51.187814] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:20:31.518 [2024-11-09 16:29:51.187820] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:20:31.518 [2024-11-09 16:29:51.187825] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:20:31.518 [2024-11-09 16:29:51.187830] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:20:31.518 [2024-11-09 16:29:51.187835] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:20:31.518 [2024-11-09 16:29:51.187840] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:20:31.518 [2024-11-09 16:29:51.187846] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:20:31.518 [2024-11-09 16:29:51.187851] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:20:31.518 [2024-11-09 16:29:51.187856] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:20:31.518 [2024-11-09 16:29:51.187862] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:20:31.518 [2024-11-09 16:29:51.187868] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:20:31.518 [2024-11-09 16:29:51.187873] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:31.518 [2024-11-09 16:29:51.187879] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:31.518 [2024-11-09 16:29:51.187885] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:31.518 [2024-11-09 16:29:51.187891] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:31.518 [2024-11-09 16:29:51.187896] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:31.518 [2024-11-09 16:29:51.187902] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:20:31.518 [2024-11-09 16:29:51.187908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.518 [2024-11-09 16:29:51.187914] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:31.518 [2024-11-09 16:29:51.187919] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.413 ms 00:20:31.518 [2024-11-09 16:29:51.187926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.518 [2024-11-09 16:29:51.199913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.518 [2024-11-09 16:29:51.200002] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:31.518 [2024-11-09 16:29:51.200044] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.959 ms 00:20:31.518 [2024-11-09 16:29:51.200065] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.518 [2024-11-09 16:29:51.200137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.518 [2024-11-09 16:29:51.200186] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:31.518 [2024-11-09 16:29:51.200204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:31.518 [2024-11-09 16:29:51.200218] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.518 [2024-11-09 16:29:51.236932] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.518 [2024-11-09 16:29:51.237042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:31.518 [2024-11-09 16:29:51.237089] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.637 ms 00:20:31.518 [2024-11-09 16:29:51.237108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.518 [2024-11-09 16:29:51.237164] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.518 [2024-11-09 16:29:51.237184] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:31.518 [2024-11-09 16:29:51.237199] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:31.518 [2024-11-09 16:29:51.237213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.518 [2024-11-09 16:29:51.237539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.518 [2024-11-09 16:29:51.237578] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:31.518 [2024-11-09 16:29:51.237598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:20:31.518 [2024-11-09 16:29:51.237612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.518 [2024-11-09 16:29:51.237710] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.518 [2024-11-09 16:29:51.237732] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:31.518 [2024-11-09 16:29:51.237748] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:31.518 [2024-11-09 16:29:51.237762] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.518 [2024-11-09 16:29:51.248769] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.518 [2024-11-09 16:29:51.248855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:31.518 [2024-11-09 16:29:51.248891] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.949 ms 00:20:31.518 [2024-11-09 
16:29:51.248908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.518 [2024-11-09 16:29:51.258482] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:31.518 [2024-11-09 16:29:51.258579] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:31.518 [2024-11-09 16:29:51.258623] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.518 [2024-11-09 16:29:51.258639] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:31.518 [2024-11-09 16:29:51.258655] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.620 ms 00:20:31.518 [2024-11-09 16:29:51.258669] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.518 [2024-11-09 16:29:51.277301] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.518 [2024-11-09 16:29:51.277388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:31.518 [2024-11-09 16:29:51.277427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.600 ms 00:20:31.518 [2024-11-09 16:29:51.277444] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.777 [2024-11-09 16:29:51.286954] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.777 [2024-11-09 16:29:51.287059] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:31.777 [2024-11-09 16:29:51.287106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.196 ms 00:20:31.777 [2024-11-09 16:29:51.287124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.777 [2024-11-09 16:29:51.295840] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.777 [2024-11-09 16:29:51.295934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:31.777 [2024-11-09 16:29:51.295977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.684 ms 00:20:31.777 [2024-11-09 16:29:51.295994] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.777 [2024-11-09 16:29:51.296283] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.777 [2024-11-09 16:29:51.296350] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:31.777 [2024-11-09 16:29:51.296389] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:20:31.777 [2024-11-09 16:29:51.296406] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.777 [2024-11-09 16:29:51.341826] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.777 [2024-11-09 16:29:51.341947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:31.777 [2024-11-09 16:29:51.341988] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.396 ms 00:20:31.777 [2024-11-09 16:29:51.342005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.777 [2024-11-09 16:29:51.350098] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:31.777 [2024-11-09 16:29:51.351917] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.777 [2024-11-09 16:29:51.351997] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:31.777 [2024-11-09 16:29:51.352046] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.878 ms 00:20:31.777 [2024-11-09 16:29:51.352064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.777 [2024-11-09 16:29:51.352121] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.777 [2024-11-09 16:29:51.352192] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:31.777 [2024-11-09 16:29:51.352243] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:31.777 [2024-11-09 16:29:51.352259] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.777 [2024-11-09 16:29:51.352317] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.777 [2024-11-09 16:29:51.352336] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:31.777 [2024-11-09 16:29:51.352351] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:31.777 [2024-11-09 16:29:51.352369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.777 [2024-11-09 16:29:51.353344] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.777 [2024-11-09 16:29:51.353424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:20:31.777 [2024-11-09 16:29:51.353461] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:20:31.777 [2024-11-09 16:29:51.353477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.777 [2024-11-09 16:29:51.353506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.777 [2024-11-09 16:29:51.353527] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:31.777 [2024-11-09 16:29:51.353578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:31.777 [2024-11-09 16:29:51.353595] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.777 [2024-11-09 16:29:51.353631] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:31.777 [2024-11-09 16:29:51.353651] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.777 [2024-11-09 16:29:51.353688] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:31.777 [2024-11-09 16:29:51.353725] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:31.777 [2024-11-09 16:29:51.353742] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.777 [2024-11-09 16:29:51.371774] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.777 [2024-11-09 16:29:51.371802] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:31.777 [2024-11-09 16:29:51.371811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.990 ms 00:20:31.778 [2024-11-09 16:29:51.371821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.778 [2024-11-09 16:29:51.371872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:31.778 [2024-11-09 16:29:51.371879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:31.778 [2024-11-09 16:29:51.371885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:31.778 [2024-11-09 16:29:51.371891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:31.778 [2024-11-09 16:29:51.372599] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 204.081 ms, result 0 00:20:32.721  [2024-11-09T16:29:53.436Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-09T16:29:54.824Z] Copying: 39/1024 [MB] (12 MBps) [2024-11-09T16:29:55.406Z] Copying: 54/1024 [MB] (15 MBps) [2024-11-09T16:29:56.787Z] Copying: 74/1024 [MB] (19 MBps) [2024-11-09T16:29:57.721Z] Copying: 86/1024 [MB] (12 MBps) [2024-11-09T16:29:58.665Z] Copying: 113/1024 [MB] (27 MBps) [2024-11-09T16:29:59.609Z] Copying: 138/1024 [MB] (25 MBps) [2024-11-09T16:30:00.544Z] Copying: 154/1024 [MB] (15 MBps) [2024-11-09T16:30:01.484Z] Copying: 197/1024 [MB] (42 MBps) [2024-11-09T16:30:02.424Z] Copying: 214/1024 [MB] (17 MBps) [2024-11-09T16:30:03.811Z] Copying: 230/1024 [MB] (16 MBps) [2024-11-09T16:30:04.388Z] Copying: 243/1024 [MB] (12 MBps) [2024-11-09T16:30:05.773Z] Copying: 254/1024 [MB] (10 MBps) [2024-11-09T16:30:06.716Z] Copying: 275/1024 [MB] (21 MBps) [2024-11-09T16:30:07.659Z] Copying: 297/1024 [MB] (22 MBps) [2024-11-09T16:30:08.603Z] Copying: 318/1024 [MB] (20 MBps) [2024-11-09T16:30:09.560Z] Copying: 332/1024 [MB] (14 MBps) [2024-11-09T16:30:10.507Z] Copying: 350/1024 [MB] (17 MBps) [2024-11-09T16:30:11.450Z] Copying: 366/1024 [MB] (16 MBps) [2024-11-09T16:30:12.394Z] Copying: 379/1024 [MB] (12 MBps) [2024-11-09T16:30:13.782Z] Copying: 395/1024 [MB] (16 MBps) [2024-11-09T16:30:14.728Z] Copying: 405/1024 [MB] (10 MBps) [2024-11-09T16:30:15.666Z] Copying: 417/1024 [MB] (11 MBps) [2024-11-09T16:30:16.611Z] Copying: 428/1024 [MB] (11 MBps) [2024-11-09T16:30:17.553Z] Copying: 444/1024 [MB] (15 MBps) [2024-11-09T16:30:18.493Z] Copying: 454/1024 [MB] (10 MBps) [2024-11-09T16:30:19.435Z] Copying: 470/1024 [MB] (15 MBps) [2024-11-09T16:30:20.823Z] Copying: 484/1024 [MB] (14 MBps) [2024-11-09T16:30:21.389Z] Copying: 494/1024 [MB] (10 MBps) [2024-11-09T16:30:22.761Z] Copying: 518/1024 [MB] (23 MBps) [2024-11-09T16:30:23.695Z] Copying: 541/1024 [MB] (23 MBps) [2024-11-09T16:30:24.661Z] Copying: 584/1024 [MB] (42 MBps) [2024-11-09T16:30:25.606Z] Copying: 598/1024 [MB] (14 MBps) [2024-11-09T16:30:26.550Z] Copying: 610/1024 [MB] (12 MBps) [2024-11-09T16:30:27.490Z] Copying: 621/1024 [MB] (10 MBps) [2024-11-09T16:30:28.423Z] Copying: 633/1024 [MB] (12 MBps) [2024-11-09T16:30:29.803Z] Copying: 657/1024 [MB] (24 MBps) [2024-11-09T16:30:30.741Z] Copying: 680/1024 [MB] (23 MBps) [2024-11-09T16:30:31.682Z] Copying: 702/1024 [MB] (22 MBps) [2024-11-09T16:30:32.631Z] Copying: 715/1024 [MB] (12 MBps) [2024-11-09T16:30:33.563Z] Copying: 736/1024 [MB] (21 MBps) [2024-11-09T16:30:34.497Z] Copying: 761/1024 [MB] (24 MBps) [2024-11-09T16:30:35.429Z] Copying: 782/1024 [MB] (21 MBps) [2024-11-09T16:30:36.804Z] Copying: 803/1024 [MB] (21 MBps) [2024-11-09T16:30:37.741Z] Copying: 824/1024 [MB] (21 MBps) [2024-11-09T16:30:38.676Z] Copying: 840/1024 [MB] (15 MBps) [2024-11-09T16:30:39.608Z] Copying: 862/1024 [MB] (22 MBps) [2024-11-09T16:30:40.539Z] Copying: 884/1024 [MB] (22 MBps) [2024-11-09T16:30:41.480Z] Copying: 924/1024 [MB] (39 MBps) [2024-11-09T16:30:42.414Z] Copying: 946/1024 [MB] (22 MBps) [2024-11-09T16:30:43.786Z] Copying: 971/1024 [MB] (24 MBps) [2024-11-09T16:30:44.718Z] Copying: 992/1024 [MB] (21 MBps) [2024-11-09T16:30:45.656Z] Copying: 1013/1024 [MB] (21 MBps) [2024-11-09T16:30:45.656Z] Copying: 1023/1024 [MB] (10 MBps) [2024-11-09T16:30:45.656Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-09 16:30:45.625077] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.886 [2024-11-09 16:30:45.625610] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:25.886 [2024-11-09 16:30:45.625668] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:25.886 [2024-11-09 16:30:45.625695] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:25.886 [2024-11-09 16:30:45.637467] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:25.886 [2024-11-09 16:30:45.643277] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:25.886 [2024-11-09 16:30:45.643339] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:25.886 [2024-11-09 16:30:45.643353] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.700 ms 00:21:25.886 [2024-11-09 16:30:45.643362] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.148 [2024-11-09 16:30:45.655827] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.148 [2024-11-09 16:30:45.656054] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:26.148 [2024-11-09 16:30:45.656079] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.127 ms 00:21:26.148 [2024-11-09 16:30:45.656088] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.148 [2024-11-09 16:30:45.676536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.148 [2024-11-09 16:30:45.676583] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:26.148 [2024-11-09 16:30:45.676596] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.421 ms 00:21:26.148 [2024-11-09 16:30:45.676604] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.148 [2024-11-09 16:30:45.682690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.148 [2024-11-09 16:30:45.682861] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:21:26.148 [2024-11-09 16:30:45.682890] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.054 ms 00:21:26.148 [2024-11-09 16:30:45.682899] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.148 [2024-11-09 16:30:45.709913] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.148 [2024-11-09 16:30:45.710093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:26.148 [2024-11-09 16:30:45.710114] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.943 ms 00:21:26.148 [2024-11-09 16:30:45.710121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.148 [2024-11-09 16:30:45.726874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.148 [2024-11-09 16:30:45.726931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:26.148 [2024-11-09 16:30:45.726946] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.436 ms 00:21:26.148 [2024-11-09 16:30:45.726955] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.148 [2024-11-09 16:30:45.855463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.148 [2024-11-09 16:30:45.855517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:26.148 [2024-11-09 16:30:45.855532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 128.452 ms 00:21:26.148 [2024-11-09 16:30:45.855548] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.148 [2024-11-09 16:30:45.882380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.148 [2024-11-09 16:30:45.882429] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:21:26.148 [2024-11-09 16:30:45.882442] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.814 ms 00:21:26.148 [2024-11-09 16:30:45.882449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.148 [2024-11-09 16:30:45.908872] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.148 [2024-11-09 16:30:45.909107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:21:26.148 [2024-11-09 16:30:45.909157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.371 ms 00:21:26.148 [2024-11-09 16:30:45.909166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.411 [2024-11-09 16:30:45.935575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.411 [2024-11-09 16:30:45.935795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:26.411 [2024-11-09 16:30:45.935820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.068 ms 00:21:26.411 [2024-11-09 16:30:45.935828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.411 [2024-11-09 16:30:45.961894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.411 [2024-11-09 16:30:45.961945] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:26.411 [2024-11-09 16:30:45.961958] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.958 ms 00:21:26.411 [2024-11-09 16:30:45.961965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.411 [2024-11-09 16:30:45.962014] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:26.411 [2024-11-09 16:30:45.962030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 93696 / 261120 wr_cnt: 1 state: open 00:21:26.411 [2024-11-09 16:30:45.962042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 
00:21:26.411 [2024-11-09 16:30:45.962122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 
wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:26.411 [2024-11-09 16:30:45.962633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962793] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:26.412 [2024-11-09 16:30:45.962929] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:26.412 [2024-11-09 16:30:45.962937] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f312cd53-be96-4ae1-a0bf-7bec45782f5f 00:21:26.412 [2024-11-09 16:30:45.962949] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 93696 00:21:26.412 [2024-11-09 16:30:45.962956] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 94656 00:21:26.412 [2024-11-09 16:30:45.962967] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 93696 00:21:26.412 [2024-11-09 16:30:45.962977] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0102 00:21:26.412 [2024-11-09 16:30:45.962984] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:26.412 [2024-11-09 16:30:45.962992] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:26.412 [2024-11-09 16:30:45.963000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:26.412 [2024-11-09 16:30:45.963015] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:26.412 [2024-11-09 16:30:45.963022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:26.412 [2024-11-09 16:30:45.963030] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.412 [2024-11-09 16:30:45.963038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump 
statistics 00:21:26.412 [2024-11-09 16:30:45.963047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.017 ms 00:21:26.412 [2024-11-09 16:30:45.963054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.412 [2024-11-09 16:30:45.976905] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.412 [2024-11-09 16:30:45.976950] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:26.412 [2024-11-09 16:30:45.976963] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.800 ms 00:21:26.412 [2024-11-09 16:30:45.976971] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.412 [2024-11-09 16:30:45.977206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.412 [2024-11-09 16:30:45.977217] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:26.412 [2024-11-09 16:30:45.977252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.211 ms 00:21:26.412 [2024-11-09 16:30:45.977261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.412 [2024-11-09 16:30:46.016513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.412 [2024-11-09 16:30:46.016712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:26.412 [2024-11-09 16:30:46.016734] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.412 [2024-11-09 16:30:46.016743] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.412 [2024-11-09 16:30:46.016813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.412 [2024-11-09 16:30:46.016823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:26.412 [2024-11-09 16:30:46.016832] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.412 [2024-11-09 16:30:46.016841] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.412 [2024-11-09 16:30:46.016931] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.412 [2024-11-09 16:30:46.016944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:26.412 [2024-11-09 16:30:46.016953] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.412 [2024-11-09 16:30:46.016961] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.412 [2024-11-09 16:30:46.016978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.412 [2024-11-09 16:30:46.016987] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:26.412 [2024-11-09 16:30:46.016995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.412 [2024-11-09 16:30:46.017002] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.412 [2024-11-09 16:30:46.099524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.412 [2024-11-09 16:30:46.099748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:26.412 [2024-11-09 16:30:46.099769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.412 [2024-11-09 16:30:46.099779] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.412 [2024-11-09 16:30:46.132999] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.412 [2024-11-09 16:30:46.133047] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:26.412 [2024-11-09 16:30:46.133059] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.412 [2024-11-09 16:30:46.133068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.412 [2024-11-09 16:30:46.133152] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.412 [2024-11-09 16:30:46.133169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:26.412 [2024-11-09 16:30:46.133179] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.412 [2024-11-09 16:30:46.133187] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.412 [2024-11-09 16:30:46.133254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.412 [2024-11-09 16:30:46.133266] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:26.412 [2024-11-09 16:30:46.133275] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.412 [2024-11-09 16:30:46.133283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.412 [2024-11-09 16:30:46.133392] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.412 [2024-11-09 16:30:46.133406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:26.412 [2024-11-09 16:30:46.133418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.412 [2024-11-09 16:30:46.133426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.412 [2024-11-09 16:30:46.133458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.412 [2024-11-09 16:30:46.133467] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:26.412 [2024-11-09 16:30:46.133476] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.412 [2024-11-09 16:30:46.133484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.412 [2024-11-09 16:30:46.133529] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.412 [2024-11-09 16:30:46.133539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:26.412 [2024-11-09 16:30:46.133551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.412 [2024-11-09 16:30:46.133560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.412 [2024-11-09 16:30:46.133609] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:26.412 [2024-11-09 16:30:46.133626] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:26.413 [2024-11-09 16:30:46.133634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:26.413 [2024-11-09 16:30:46.133642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.413 [2024-11-09 16:30:46.133772] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 510.124 ms, result 0 00:21:28.330 00:21:28.330 00:21:28.330 16:30:47 -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:28.330 [2024-11-09 16:30:47.774218] Starting SPDK v24.01.1-pre git sha1 
c13c99a5e / DPDK 23.11.0 initialization... 00:21:28.330 [2024-11-09 16:30:47.774392] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75357 ] 00:21:28.330 [2024-11-09 16:30:47.926255] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.591 [2024-11-09 16:30:48.147687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:28.853 [2024-11-09 16:30:48.436755] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:28.853 [2024-11-09 16:30:48.436831] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:28.853 [2024-11-09 16:30:48.595282] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.853 [2024-11-09 16:30:48.595341] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:28.853 [2024-11-09 16:30:48.595357] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:28.853 [2024-11-09 16:30:48.595369] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.853 [2024-11-09 16:30:48.595428] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.853 [2024-11-09 16:30:48.595440] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:28.853 [2024-11-09 16:30:48.595449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:28.853 [2024-11-09 16:30:48.595457] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.853 [2024-11-09 16:30:48.595477] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:28.853 [2024-11-09 16:30:48.596257] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:28.853 [2024-11-09 16:30:48.596281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.853 [2024-11-09 16:30:48.596290] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:28.853 [2024-11-09 16:30:48.596300] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.809 ms 00:21:28.853 [2024-11-09 16:30:48.596308] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.853 [2024-11-09 16:30:48.598070] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:28.853 [2024-11-09 16:30:48.613015] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.853 [2024-11-09 16:30:48.613077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:28.853 [2024-11-09 16:30:48.613092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.946 ms 00:21:28.853 [2024-11-09 16:30:48.613100] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.853 [2024-11-09 16:30:48.613201] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.853 [2024-11-09 16:30:48.613213] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:28.853 [2024-11-09 16:30:48.613244] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:21:28.853 [2024-11-09 16:30:48.613253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.117 [2024-11-09 16:30:48.621666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:21:29.117 [2024-11-09 16:30:48.621712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:29.117 [2024-11-09 16:30:48.621724] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.331 ms 00:21:29.117 [2024-11-09 16:30:48.621732] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.117 [2024-11-09 16:30:48.621828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.117 [2024-11-09 16:30:48.621839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:29.117 [2024-11-09 16:30:48.621848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:21:29.117 [2024-11-09 16:30:48.621857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.117 [2024-11-09 16:30:48.621906] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.117 [2024-11-09 16:30:48.621916] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:29.117 [2024-11-09 16:30:48.621924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:29.117 [2024-11-09 16:30:48.621932] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.117 [2024-11-09 16:30:48.621963] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:29.117 [2024-11-09 16:30:48.626289] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.117 [2024-11-09 16:30:48.626330] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:29.117 [2024-11-09 16:30:48.626341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.339 ms 00:21:29.117 [2024-11-09 16:30:48.626350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.117 [2024-11-09 16:30:48.626388] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.117 [2024-11-09 16:30:48.626398] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:29.117 [2024-11-09 16:30:48.626406] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:29.117 [2024-11-09 16:30:48.626417] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.117 [2024-11-09 16:30:48.626468] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:29.117 [2024-11-09 16:30:48.626491] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:21:29.117 [2024-11-09 16:30:48.626527] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:29.117 [2024-11-09 16:30:48.626543] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:21:29.117 [2024-11-09 16:30:48.626618] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:21:29.117 [2024-11-09 16:30:48.626629] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:29.117 [2024-11-09 16:30:48.626643] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:21:29.117 [2024-11-09 16:30:48.626654] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:29.117 [2024-11-09 16:30:48.626663] 
ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:29.117 [2024-11-09 16:30:48.626672] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:29.117 [2024-11-09 16:30:48.626680] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:29.117 [2024-11-09 16:30:48.626689] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:21:29.117 [2024-11-09 16:30:48.626696] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:21:29.117 [2024-11-09 16:30:48.626704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.117 [2024-11-09 16:30:48.626712] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:29.117 [2024-11-09 16:30:48.626720] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.239 ms 00:21:29.117 [2024-11-09 16:30:48.626727] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.117 [2024-11-09 16:30:48.626790] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.117 [2024-11-09 16:30:48.626806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:29.117 [2024-11-09 16:30:48.626814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:29.117 [2024-11-09 16:30:48.626821] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.117 [2024-11-09 16:30:48.626892] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:29.117 [2024-11-09 16:30:48.626902] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:29.117 [2024-11-09 16:30:48.626911] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:29.117 [2024-11-09 16:30:48.626919] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.117 [2024-11-09 16:30:48.626927] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:29.117 [2024-11-09 16:30:48.626934] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:29.117 [2024-11-09 16:30:48.626941] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:29.117 [2024-11-09 16:30:48.626948] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:29.117 [2024-11-09 16:30:48.626956] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:29.117 [2024-11-09 16:30:48.626962] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:29.117 [2024-11-09 16:30:48.626969] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:29.117 [2024-11-09 16:30:48.626975] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:29.117 [2024-11-09 16:30:48.626984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:29.117 [2024-11-09 16:30:48.626991] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:29.117 [2024-11-09 16:30:48.626998] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:21:29.117 [2024-11-09 16:30:48.627004] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.117 [2024-11-09 16:30:48.627019] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:29.117 [2024-11-09 16:30:48.627026] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:21:29.117 [2024-11-09 16:30:48.627032] ftl_layout.c: 
118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.117 [2024-11-09 16:30:48.627039] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:21:29.117 [2024-11-09 16:30:48.627046] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:21:29.117 [2024-11-09 16:30:48.627052] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:21:29.117 [2024-11-09 16:30:48.627059] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:29.117 [2024-11-09 16:30:48.627065] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:29.117 [2024-11-09 16:30:48.627072] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:29.117 [2024-11-09 16:30:48.627078] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:29.117 [2024-11-09 16:30:48.627085] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:21:29.117 [2024-11-09 16:30:48.627091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:29.117 [2024-11-09 16:30:48.627097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:29.117 [2024-11-09 16:30:48.627104] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:29.117 [2024-11-09 16:30:48.627111] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:29.117 [2024-11-09 16:30:48.627117] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:29.117 [2024-11-09 16:30:48.627124] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:21:29.117 [2024-11-09 16:30:48.627131] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:21:29.117 [2024-11-09 16:30:48.627137] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:29.117 [2024-11-09 16:30:48.627144] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:29.117 [2024-11-09 16:30:48.627150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:29.117 [2024-11-09 16:30:48.627157] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:29.117 [2024-11-09 16:30:48.627163] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:21:29.117 [2024-11-09 16:30:48.627169] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:29.118 [2024-11-09 16:30:48.627175] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:29.118 [2024-11-09 16:30:48.627185] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:29.118 [2024-11-09 16:30:48.627193] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:29.118 [2024-11-09 16:30:48.627201] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:29.118 [2024-11-09 16:30:48.627212] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:29.118 [2024-11-09 16:30:48.627219] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:29.118 [2024-11-09 16:30:48.627256] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:29.118 [2024-11-09 16:30:48.627264] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:29.118 [2024-11-09 16:30:48.627271] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:29.118 [2024-11-09 16:30:48.627278] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:29.118 
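The MiB figures in the layout dump above and the blk_offs/blk_sz hex values in the SB metadata dump that follows describe the same regions, and they reconcile if you assume SPDK FTL's 4 KiB block size. A quick back-of-the-envelope check (hypothetical script, not part of the test):

    FTL_BLOCK_SIZE = 4096  # assumed 4 KiB FTL block

    def blocks_to_mib(blk_sz: int) -> float:
        return blk_sz * FTL_BLOCK_SIZE / (1 << 20)

    assert blocks_to_mib(0x5000) == 80.0         # l2p region: 80.00 MiB
    assert blocks_to_mib(0x80) == 0.5            # band_md: 0.50 MiB
    assert blocks_to_mib(0x400) == 4.0           # each p2l region: 4.00 MiB
    assert blocks_to_mib(0x1900000) == 102400.0  # data_btm: 102400.00 MiB

    # The L2P table size also falls out of the dump above:
    # 20971520 entries * 4-byte address size = 80 MiB, the l2p region.
    assert 20971520 * 4 == 80 * (1 << 20)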
[2024-11-09 16:30:48.627286] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:29.118 [2024-11-09 16:30:48.627298] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:29.118 [2024-11-09 16:30:48.627306] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:29.118 [2024-11-09 16:30:48.627315] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:21:29.118 [2024-11-09 16:30:48.627323] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:21:29.118 [2024-11-09 16:30:48.627331] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:21:29.118 [2024-11-09 16:30:48.627338] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:21:29.118 [2024-11-09 16:30:48.627346] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:21:29.118 [2024-11-09 16:30:48.627354] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:21:29.118 [2024-11-09 16:30:48.627362] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:21:29.118 [2024-11-09 16:30:48.627369] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:21:29.118 [2024-11-09 16:30:48.627377] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:21:29.118 [2024-11-09 16:30:48.627384] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:21:29.118 [2024-11-09 16:30:48.627392] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:21:29.118 [2024-11-09 16:30:48.627402] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:21:29.118 [2024-11-09 16:30:48.627409] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:29.118 [2024-11-09 16:30:48.627417] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:29.118 [2024-11-09 16:30:48.627425] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:29.118 [2024-11-09 16:30:48.627432] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:29.118 [2024-11-09 16:30:48.627440] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:29.118 [2024-11-09 16:30:48.627447] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:29.118 [2024-11-09 16:30:48.627455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.118 [2024-11-09 16:30:48.627463] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:29.118 [2024-11-09 16:30:48.627470] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.607 ms 00:21:29.118 [2024-11-09 16:30:48.627478] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.118 [2024-11-09 16:30:48.645744] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.118 [2024-11-09 16:30:48.645796] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:29.118 [2024-11-09 16:30:48.645809] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.221 ms 00:21:29.118 [2024-11-09 16:30:48.645824] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.118 [2024-11-09 16:30:48.645919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.118 [2024-11-09 16:30:48.645928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:29.118 [2024-11-09 16:30:48.645938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:29.118 [2024-11-09 16:30:48.645945] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.118 [2024-11-09 16:30:48.692552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.118 [2024-11-09 16:30:48.692773] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:29.118 [2024-11-09 16:30:48.692796] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.549 ms 00:21:29.118 [2024-11-09 16:30:48.692805] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.118 [2024-11-09 16:30:48.692857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.118 [2024-11-09 16:30:48.692868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:29.118 [2024-11-09 16:30:48.692877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:29.118 [2024-11-09 16:30:48.692885] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.118 [2024-11-09 16:30:48.693523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.118 [2024-11-09 16:30:48.693559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:29.118 [2024-11-09 16:30:48.693570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:21:29.118 [2024-11-09 16:30:48.693585] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.118 [2024-11-09 16:30:48.693714] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.118 [2024-11-09 16:30:48.693724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:29.118 [2024-11-09 16:30:48.693733] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:21:29.118 [2024-11-09 16:30:48.693741] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.118 [2024-11-09 16:30:48.710438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.118 [2024-11-09 16:30:48.710485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:29.118 [2024-11-09 16:30:48.710497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
16.672 ms 00:21:29.118 [2024-11-09 16:30:48.710505] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.118 [2024-11-09 16:30:48.724785] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:29.118 [2024-11-09 16:30:48.724836] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:29.118 [2024-11-09 16:30:48.724848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.118 [2024-11-09 16:30:48.724856] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:29.118 [2024-11-09 16:30:48.724867] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.233 ms 00:21:29.118 [2024-11-09 16:30:48.724874] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.118 [2024-11-09 16:30:48.750825] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.118 [2024-11-09 16:30:48.750879] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:29.118 [2024-11-09 16:30:48.750892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.895 ms 00:21:29.118 [2024-11-09 16:30:48.750900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.118 [2024-11-09 16:30:48.764069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.118 [2024-11-09 16:30:48.764114] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:29.118 [2024-11-09 16:30:48.764126] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.113 ms 00:21:29.118 [2024-11-09 16:30:48.764133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.118 [2024-11-09 16:30:48.776778] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.118 [2024-11-09 16:30:48.776826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:29.118 [2024-11-09 16:30:48.776848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.596 ms 00:21:29.118 [2024-11-09 16:30:48.776855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.118 [2024-11-09 16:30:48.777291] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.118 [2024-11-09 16:30:48.777308] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:29.118 [2024-11-09 16:30:48.777319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:21:29.118 [2024-11-09 16:30:48.777327] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.118 [2024-11-09 16:30:48.845375] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.118 [2024-11-09 16:30:48.845434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:29.118 [2024-11-09 16:30:48.845450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.029 ms 00:21:29.119 [2024-11-09 16:30:48.845459] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.119 [2024-11-09 16:30:48.857120] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:29.119 [2024-11-09 16:30:48.860324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.119 [2024-11-09 16:30:48.860367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:29.119 [2024-11-09 
16:30:48.860380] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.802 ms 00:21:29.119 [2024-11-09 16:30:48.860395] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.119 [2024-11-09 16:30:48.860468] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.119 [2024-11-09 16:30:48.860479] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:29.119 [2024-11-09 16:30:48.860487] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:29.119 [2024-11-09 16:30:48.860495] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.119 [2024-11-09 16:30:48.861912] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.119 [2024-11-09 16:30:48.861962] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:29.119 [2024-11-09 16:30:48.861973] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.379 ms 00:21:29.119 [2024-11-09 16:30:48.861982] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.119 [2024-11-09 16:30:48.863386] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.119 [2024-11-09 16:30:48.863423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:21:29.119 [2024-11-09 16:30:48.863433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.370 ms 00:21:29.119 [2024-11-09 16:30:48.863440] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.119 [2024-11-09 16:30:48.863476] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.119 [2024-11-09 16:30:48.863485] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:29.119 [2024-11-09 16:30:48.863500] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:29.119 [2024-11-09 16:30:48.863508] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.119 [2024-11-09 16:30:48.863545] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:29.119 [2024-11-09 16:30:48.863555] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.119 [2024-11-09 16:30:48.863566] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:29.119 [2024-11-09 16:30:48.863574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:29.119 [2024-11-09 16:30:48.863582] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.381 [2024-11-09 16:30:48.889754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.381 [2024-11-09 16:30:48.889947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:29.381 [2024-11-09 16:30:48.889971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.152 ms 00:21:29.381 [2024-11-09 16:30:48.889980] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.381 [2024-11-09 16:30:48.890064] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.381 [2024-11-09 16:30:48.890074] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:29.381 [2024-11-09 16:30:48.890083] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:21:29.381 [2024-11-09 16:30:48.890091] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.381 [2024-11-09 16:30:48.896357] mngt/ftl_mngt.c: 
434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 298.466 ms, result 0 00:21:30.327  [2024-11-09T16:30:51.484Z] Copying: 12/1024 [MB] (12 MBps) [2024-11-09T16:30:52.475Z] Copying: 31/1024 [MB] (19 MBps) [2024-11-09T16:30:53.415Z] Copying: 42/1024 [MB] (10 MBps) [2024-11-09T16:30:54.360Z] Copying: 69/1024 [MB] (27 MBps) [2024-11-09T16:30:55.305Z] Copying: 90/1024 [MB] (20 MBps) [2024-11-09T16:30:56.253Z] Copying: 112/1024 [MB] (21 MBps) [2024-11-09T16:30:57.196Z] Copying: 133/1024 [MB] (21 MBps) [2024-11-09T16:30:58.142Z] Copying: 150/1024 [MB] (16 MBps) [2024-11-09T16:30:59.095Z] Copying: 165/1024 [MB] (15 MBps) [2024-11-09T16:31:00.482Z] Copying: 179/1024 [MB] (13 MBps) [2024-11-09T16:31:01.425Z] Copying: 196/1024 [MB] (16 MBps) [2024-11-09T16:31:02.370Z] Copying: 215/1024 [MB] (19 MBps) [2024-11-09T16:31:03.315Z] Copying: 234/1024 [MB] (18 MBps) [2024-11-09T16:31:04.262Z] Copying: 252/1024 [MB] (18 MBps) [2024-11-09T16:31:05.209Z] Copying: 269/1024 [MB] (16 MBps) [2024-11-09T16:31:06.155Z] Copying: 279/1024 [MB] (10 MBps) [2024-11-09T16:31:07.103Z] Copying: 290/1024 [MB] (10 MBps) [2024-11-09T16:31:08.489Z] Copying: 302/1024 [MB] (12 MBps) [2024-11-09T16:31:09.434Z] Copying: 314/1024 [MB] (11 MBps) [2024-11-09T16:31:10.380Z] Copying: 324/1024 [MB] (10 MBps) [2024-11-09T16:31:11.327Z] Copying: 335/1024 [MB] (10 MBps) [2024-11-09T16:31:12.274Z] Copying: 346/1024 [MB] (10 MBps) [2024-11-09T16:31:13.222Z] Copying: 356/1024 [MB] (10 MBps) [2024-11-09T16:31:14.169Z] Copying: 367/1024 [MB] (10 MBps) [2024-11-09T16:31:15.113Z] Copying: 377/1024 [MB] (10 MBps) [2024-11-09T16:31:16.498Z] Copying: 389/1024 [MB] (12 MBps) [2024-11-09T16:31:17.440Z] Copying: 408/1024 [MB] (19 MBps) [2024-11-09T16:31:18.381Z] Copying: 427/1024 [MB] (18 MBps) [2024-11-09T16:31:19.325Z] Copying: 439/1024 [MB] (12 MBps) [2024-11-09T16:31:20.271Z] Copying: 463/1024 [MB] (24 MBps) [2024-11-09T16:31:21.218Z] Copying: 478/1024 [MB] (14 MBps) [2024-11-09T16:31:22.157Z] Copying: 490/1024 [MB] (12 MBps) [2024-11-09T16:31:23.101Z] Copying: 512/1024 [MB] (22 MBps) [2024-11-09T16:31:24.489Z] Copying: 528/1024 [MB] (15 MBps) [2024-11-09T16:31:25.434Z] Copying: 545/1024 [MB] (16 MBps) [2024-11-09T16:31:26.378Z] Copying: 558/1024 [MB] (12 MBps) [2024-11-09T16:31:27.320Z] Copying: 576/1024 [MB] (18 MBps) [2024-11-09T16:31:28.264Z] Copying: 594/1024 [MB] (18 MBps) [2024-11-09T16:31:29.206Z] Copying: 611/1024 [MB] (16 MBps) [2024-11-09T16:31:30.149Z] Copying: 627/1024 [MB] (15 MBps) [2024-11-09T16:31:31.094Z] Copying: 640/1024 [MB] (13 MBps) [2024-11-09T16:31:32.480Z] Copying: 658/1024 [MB] (17 MBps) [2024-11-09T16:31:33.427Z] Copying: 674/1024 [MB] (16 MBps) [2024-11-09T16:31:34.371Z] Copying: 684/1024 [MB] (10 MBps) [2024-11-09T16:31:35.317Z] Copying: 695/1024 [MB] (10 MBps) [2024-11-09T16:31:36.264Z] Copying: 705/1024 [MB] (10 MBps) [2024-11-09T16:31:37.208Z] Copying: 717/1024 [MB] (12 MBps) [2024-11-09T16:31:38.154Z] Copying: 737/1024 [MB] (19 MBps) [2024-11-09T16:31:39.100Z] Copying: 748/1024 [MB] (10 MBps) [2024-11-09T16:31:40.489Z] Copying: 759/1024 [MB] (10 MBps) [2024-11-09T16:31:41.432Z] Copying: 774/1024 [MB] (15 MBps) [2024-11-09T16:31:42.378Z] Copying: 794/1024 [MB] (19 MBps) [2024-11-09T16:31:43.320Z] Copying: 808/1024 [MB] (14 MBps) [2024-11-09T16:31:44.263Z] Copying: 824/1024 [MB] (16 MBps) [2024-11-09T16:31:45.208Z] Copying: 839/1024 [MB] (14 MBps) [2024-11-09T16:31:46.153Z] Copying: 852/1024 [MB] (13 MBps) [2024-11-09T16:31:47.098Z] Copying: 869/1024 [MB] (16 MBps) 
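The x/1024 [MB] figures in this progress meter follow directly from the spdk_dd invocation above: --count=262144 blocks at the bdev's assumed 4 KiB block size is 1024 MB of data, read starting 512 MiB into ftl0 (--skip=131072). A sanity check of those numbers (hypothetical script):

    BDEV_BLOCK = 4096  # assumed 4 KiB FTL bdev block size

    total_mib = 262144 * BDEV_BLOCK // (1 << 20)  # --count
    skip_mib = 131072 * BDEV_BLOCK // (1 << 20)   # --skip

    assert total_mib == 1024  # matches the .../1024 [MB] progress meter
    assert skip_mib == 512    # offset of the copied region within ftl0

    # At the reported 15 MBps average, 1024 MB needs roughly 68 s of wall
    # time, in line with the 16:30:51 -> 16:31:57 span of the timestamps.
    print(total_mib / 15)  # ~68.3 seconds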
[2024-11-09T16:31:48.485Z] Copying: 887/1024 [MB] (17 MBps) [2024-11-09T16:31:49.429Z] Copying: 899/1024 [MB] (12 MBps) [2024-11-09T16:31:50.375Z] Copying: 909/1024 [MB] (10 MBps) [2024-11-09T16:31:51.318Z] Copying: 926/1024 [MB] (16 MBps) [2024-11-09T16:31:52.263Z] Copying: 945/1024 [MB] (19 MBps) [2024-11-09T16:31:53.207Z] Copying: 958/1024 [MB] (12 MBps) [2024-11-09T16:31:54.150Z] Copying: 978/1024 [MB] (20 MBps) [2024-11-09T16:31:55.094Z] Copying: 991/1024 [MB] (12 MBps) [2024-11-09T16:31:56.482Z] Copying: 1002/1024 [MB] (10 MBps) [2024-11-09T16:31:57.425Z] Copying: 1012/1024 [MB] (10 MBps) [2024-11-09T16:31:57.425Z] Copying: 1023/1024 [MB] (10 MBps) [2024-11-09T16:31:57.425Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-09 16:31:57.374006] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.655 [2024-11-09 16:31:57.374089] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:37.655 [2024-11-09 16:31:57.374124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:37.655 [2024-11-09 16:31:57.374134] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.655 [2024-11-09 16:31:57.374159] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:37.655 [2024-11-09 16:31:57.377316] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.655 [2024-11-09 16:31:57.377355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:37.655 [2024-11-09 16:31:57.377367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.139 ms 00:22:37.655 [2024-11-09 16:31:57.377377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.655 [2024-11-09 16:31:57.377645] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.655 [2024-11-09 16:31:57.377656] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:37.655 [2024-11-09 16:31:57.377670] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:22:37.655 [2024-11-09 16:31:57.377678] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.655 [2024-11-09 16:31:57.384992] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.655 [2024-11-09 16:31:57.385034] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:37.655 [2024-11-09 16:31:57.385046] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.295 ms 00:22:37.655 [2024-11-09 16:31:57.385054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.655 [2024-11-09 16:31:57.393081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.655 [2024-11-09 16:31:57.393118] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:22:37.655 [2024-11-09 16:31:57.393129] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.944 ms 00:22:37.655 [2024-11-09 16:31:57.393145] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:37.655 [2024-11-09 16:31:57.419857] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.655 [2024-11-09 16:31:57.419894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:37.655 [2024-11-09 16:31:57.419908] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.640 ms 00:22:37.655 [2024-11-09 16:31:57.419916] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:37.916 [2024-11-09 16:31:57.436836] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:37.916 [2024-11-09 16:31:57.436872] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:37.916 [2024-11-09 16:31:57.436885] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.873 ms 00:22:37.917 [2024-11-09 16:31:57.436894] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.180 [2024-11-09 16:31:57.751079] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.180 [2024-11-09 16:31:57.751129] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:38.180 [2024-11-09 16:31:57.751142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 314.132 ms 00:22:38.180 [2024-11-09 16:31:57.751150] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.180 [2024-11-09 16:31:57.776384] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.180 [2024-11-09 16:31:57.776426] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:22:38.180 [2024-11-09 16:31:57.776438] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.210 ms 00:22:38.180 [2024-11-09 16:31:57.776445] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.180 [2024-11-09 16:31:57.801460] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.180 [2024-11-09 16:31:57.801505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:22:38.180 [2024-11-09 16:31:57.801517] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.975 ms 00:22:38.180 [2024-11-09 16:31:57.801537] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.180 [2024-11-09 16:31:57.826455] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.180 [2024-11-09 16:31:57.826643] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:38.180 [2024-11-09 16:31:57.826665] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.876 ms 00:22:38.180 [2024-11-09 16:31:57.826673] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.180 [2024-11-09 16:31:57.851495] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.180 [2024-11-09 16:31:57.851539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:38.180 [2024-11-09 16:31:57.851551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.645 ms 00:22:38.180 [2024-11-09 16:31:57.851558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.180 [2024-11-09 16:31:57.851600] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:38.180 [2024-11-09 16:31:57.851616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 133888 / 261120 wr_cnt: 1 state: open 00:22:38.180 [2024-11-09 16:31:57.851627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:38.180 [2024-11-09 16:31:57.851635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:38.180 [2024-11-09 16:31:57.851643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:38.180 [2024-11-09 16:31:57.851651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 
wr_cnt: 0 state: free 00:22:38.180 [2024-11-09 16:31:57.851659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:38.180 [2024-11-09 16:31:57.851667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:38.180 [2024-11-09 16:31:57.851675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:38.180 [2024-11-09 16:31:57.851683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:38.180 [2024-11-09 16:31:57.851691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:38.180 [2024-11-09 16:31:57.851698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:38.180 [2024-11-09 16:31:57.851707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:38.180 [2024-11-09 16:31:57.851715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:38.180 [2024-11-09 16:31:57.851723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:38.180 [2024-11-09 16:31:57.851730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:38.180 [2024-11-09 16:31:57.851738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:38.180 [2024-11-09 16:31:57.851746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.851995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852040] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852260] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:38.181 [2024-11-09 16:31:57.852436] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:38.181 [2024-11-09 16:31:57.852445] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: f312cd53-be96-4ae1-a0bf-7bec45782f5f 00:22:38.181 [2024-11-09 16:31:57.852454] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 133888 00:22:38.181 [2024-11-09 16:31:57.852461] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 41152 00:22:38.181 [2024-11-09 16:31:57.852468] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user 
writes: 40192 00:22:38.181 [2024-11-09 16:31:57.852483] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0239 00:22:38.181 [2024-11-09 16:31:57.852490] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:38.181 [2024-11-09 16:31:57.852499] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:38.181 [2024-11-09 16:31:57.852506] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:38.182 [2024-11-09 16:31:57.852513] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:38.182 [2024-11-09 16:31:57.852526] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:38.182 [2024-11-09 16:31:57.852535] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.182 [2024-11-09 16:31:57.852543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:38.182 [2024-11-09 16:31:57.852551] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.935 ms 00:22:38.182 [2024-11-09 16:31:57.852559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.182 [2024-11-09 16:31:57.866416] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.182 [2024-11-09 16:31:57.866460] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:38.182 [2024-11-09 16:31:57.866471] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.824 ms 00:22:38.182 [2024-11-09 16:31:57.866479] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.182 [2024-11-09 16:31:57.866703] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:38.182 [2024-11-09 16:31:57.866713] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:38.182 [2024-11-09 16:31:57.866722] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:22:38.182 [2024-11-09 16:31:57.866730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.182 [2024-11-09 16:31:57.905946] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.182 [2024-11-09 16:31:57.906137] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:38.182 [2024-11-09 16:31:57.906158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.182 [2024-11-09 16:31:57.906166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.182 [2024-11-09 16:31:57.906254] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.182 [2024-11-09 16:31:57.906264] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:38.182 [2024-11-09 16:31:57.906273] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.182 [2024-11-09 16:31:57.906281] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.182 [2024-11-09 16:31:57.906355] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.182 [2024-11-09 16:31:57.906369] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:38.182 [2024-11-09 16:31:57.906377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.182 [2024-11-09 16:31:57.906385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.182 [2024-11-09 16:31:57.906401] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.182 [2024-11-09 16:31:57.906410] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:38.182 [2024-11-09 16:31:57.906418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.182 [2024-11-09 16:31:57.906426] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.444 [2024-11-09 16:31:57.987179] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.444 [2024-11-09 16:31:57.987247] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:38.444 [2024-11-09 16:31:57.987261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.444 [2024-11-09 16:31:57.987269] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.444 [2024-11-09 16:31:58.018523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.444 [2024-11-09 16:31:58.018567] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:38.444 [2024-11-09 16:31:58.018578] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.444 [2024-11-09 16:31:58.018587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.444 [2024-11-09 16:31:58.018654] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.444 [2024-11-09 16:31:58.018664] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:38.444 [2024-11-09 16:31:58.018680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.444 [2024-11-09 16:31:58.018688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.444 [2024-11-09 16:31:58.018732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.444 [2024-11-09 16:31:58.018742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:38.444 [2024-11-09 16:31:58.018751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.444 [2024-11-09 16:31:58.018759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.444 [2024-11-09 16:31:58.018855] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.444 [2024-11-09 16:31:58.018865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:38.444 [2024-11-09 16:31:58.018873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.444 [2024-11-09 16:31:58.018884] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.444 [2024-11-09 16:31:58.018916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.444 [2024-11-09 16:31:58.018924] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:38.444 [2024-11-09 16:31:58.018932] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.444 [2024-11-09 16:31:58.018940] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.444 [2024-11-09 16:31:58.018983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:38.444 [2024-11-09 16:31:58.018992] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:38.444 [2024-11-09 16:31:58.019000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.444 [2024-11-09 16:31:58.019011] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.444 [2024-11-09 16:31:58.019059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:22:38.444 [2024-11-09 16:31:58.019069] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:38.444 [2024-11-09 16:31:58.019077] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:38.444 [2024-11-09 16:31:58.019085] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:38.444 [2024-11-09 16:31:58.019220] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 645.183 ms, result 0 00:22:39.387 00:22:39.387 00:22:39.387 16:31:58 -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:41.935 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:41.935 16:32:01 -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:41.935 16:32:01 -- ftl/restore.sh@85 -- # restore_kill 00:22:41.935 16:32:01 -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:41.935 16:32:01 -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:41.935 16:32:01 -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:41.935 Process with pid 73125 is not found 00:22:41.935 Remove shared memory files 00:22:41.935 16:32:01 -- ftl/restore.sh@32 -- # killprocess 73125 00:22:41.935 16:32:01 -- common/autotest_common.sh@936 -- # '[' -z 73125 ']' 00:22:41.936 16:32:01 -- common/autotest_common.sh@940 -- # kill -0 73125 00:22:41.936 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (73125) - No such process 00:22:41.936 16:32:01 -- common/autotest_common.sh@963 -- # echo 'Process with pid 73125 is not found' 00:22:41.936 16:32:01 -- ftl/restore.sh@33 -- # remove_shm 00:22:41.936 16:32:01 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:41.936 16:32:01 -- ftl/common.sh@205 -- # rm -f rm -f 00:22:41.936 16:32:01 -- ftl/common.sh@206 -- # rm -f rm -f 00:22:41.936 16:32:01 -- ftl/common.sh@207 -- # rm -f rm -f 00:22:41.936 16:32:01 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:41.936 16:32:01 -- ftl/common.sh@209 -- # rm -f rm -f 00:22:41.936 ************************************ 00:22:41.936 END TEST ftl_restore 00:22:41.936 ************************************ 00:22:41.936 00:22:41.936 real 4m42.902s 00:22:41.936 user 4m30.010s 00:22:41.936 sys 0m12.638s 00:22:41.936 16:32:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:41.936 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:22:41.936 16:32:01 -- ftl/ftl.sh@78 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:22:41.936 16:32:01 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:22:41.936 16:32:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:41.936 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:22:41.936 ************************************ 00:22:41.936 START TEST ftl_dirty_shutdown 00:22:41.936 ************************************ 00:22:41.936 16:32:01 -- common/autotest_common.sh@1114 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:06.0 0000:00:07.0 00:22:41.936 * Looking for test storage... 
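The teardown just logged is internally consistent: the stats dump reported 41152 total writes against 40192 user writes, and 41152/40192 ≈ 1.0239, exactly the WAF the device printed, and md5sum -c confirmed the restored testfile. The "No such process" line is expected rather than an error: the killprocess helper probes the PID with kill -0, which sends no signal and only tests for existence, and the target had already exited. A minimal standalone sketch of that probe-then-kill pattern (hypothetical; the real helper lives in test/common/autotest_common.sh):

    #!/usr/bin/env bash
    # Probe-then-kill, killprocess style: kill -0 sends no signal and
    # only reports whether the PID exists, so a target that already
    # exited is logged instead of being treated as a failure.
    killprocess_sketch() {
        local pid=$1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"
            wait "$pid" 2>/dev/null   # reaps only if it was our child
        else
            echo "Process with pid $pid is not found"
        fi
    }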
00:22:41.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:41.936 16:32:01 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:41.936 16:32:01 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:41.936 16:32:01 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:41.936 16:32:01 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:41.936 16:32:01 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:41.936 16:32:01 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:41.936 16:32:01 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:41.936 16:32:01 -- scripts/common.sh@335 -- # IFS=.-: 00:22:41.936 16:32:01 -- scripts/common.sh@335 -- # read -ra ver1 00:22:41.936 16:32:01 -- scripts/common.sh@336 -- # IFS=.-: 00:22:41.936 16:32:01 -- scripts/common.sh@336 -- # read -ra ver2 00:22:41.936 16:32:01 -- scripts/common.sh@337 -- # local 'op=<' 00:22:41.936 16:32:01 -- scripts/common.sh@339 -- # ver1_l=2 00:22:41.936 16:32:01 -- scripts/common.sh@340 -- # ver2_l=1 00:22:41.936 16:32:01 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:41.936 16:32:01 -- scripts/common.sh@343 -- # case "$op" in 00:22:41.936 16:32:01 -- scripts/common.sh@344 -- # : 1 00:22:41.936 16:32:01 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:41.936 16:32:01 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:41.936 16:32:01 -- scripts/common.sh@364 -- # decimal 1 00:22:41.936 16:32:01 -- scripts/common.sh@352 -- # local d=1 00:22:41.936 16:32:01 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:41.936 16:32:01 -- scripts/common.sh@354 -- # echo 1 00:22:41.936 16:32:01 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:41.936 16:32:01 -- scripts/common.sh@365 -- # decimal 2 00:22:41.936 16:32:01 -- scripts/common.sh@352 -- # local d=2 00:22:41.936 16:32:01 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:41.936 16:32:01 -- scripts/common.sh@354 -- # echo 2 00:22:41.936 16:32:01 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:41.936 16:32:01 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:41.936 16:32:01 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:41.936 16:32:01 -- scripts/common.sh@367 -- # return 0 00:22:41.936 16:32:01 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:41.936 16:32:01 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:41.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.936 --rc genhtml_branch_coverage=1 00:22:41.936 --rc genhtml_function_coverage=1 00:22:41.936 --rc genhtml_legend=1 00:22:41.936 --rc geninfo_all_blocks=1 00:22:41.936 --rc geninfo_unexecuted_blocks=1 00:22:41.936 00:22:41.936 ' 00:22:41.936 16:32:01 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:41.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.936 --rc genhtml_branch_coverage=1 00:22:41.936 --rc genhtml_function_coverage=1 00:22:41.936 --rc genhtml_legend=1 00:22:41.936 --rc geninfo_all_blocks=1 00:22:41.936 --rc geninfo_unexecuted_blocks=1 00:22:41.936 00:22:41.936 ' 00:22:41.936 16:32:01 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:41.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.936 --rc genhtml_branch_coverage=1 00:22:41.936 --rc genhtml_function_coverage=1 00:22:41.936 --rc genhtml_legend=1 00:22:41.936 --rc geninfo_all_blocks=1 00:22:41.936 --rc geninfo_unexecuted_blocks=1 00:22:41.936 00:22:41.936 ' 00:22:41.936 16:32:01 -- 
common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:41.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:41.936 --rc genhtml_branch_coverage=1 00:22:41.936 --rc genhtml_function_coverage=1 00:22:41.936 --rc genhtml_legend=1 00:22:41.936 --rc geninfo_all_blocks=1 00:22:41.936 --rc geninfo_unexecuted_blocks=1 00:22:41.936 00:22:41.936 ' 00:22:41.936 16:32:01 -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:41.936 16:32:01 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:41.936 16:32:01 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:41.936 16:32:01 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:41.936 16:32:01 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:41.936 16:32:01 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:41.936 16:32:01 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:41.936 16:32:01 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:41.936 16:32:01 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:41.936 16:32:01 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:41.936 16:32:01 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:41.936 16:32:01 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:41.936 16:32:01 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:41.936 16:32:01 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:41.936 16:32:01 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:41.936 16:32:01 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:41.936 16:32:01 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:41.936 16:32:01 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:41.936 16:32:01 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:41.936 16:32:01 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:41.936 16:32:01 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:41.936 16:32:01 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:41.936 16:32:01 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:41.936 16:32:01 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:41.936 16:32:01 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:41.936 16:32:01 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:41.936 16:32:01 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:41.936 16:32:01 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:41.936 16:32:01 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:41.936 16:32:01 -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:41.936 16:32:01 -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:41.936 16:32:01 -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:41.936 16:32:01 -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:41.936 16:32:01 -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:06.0 00:22:41.936 16:32:01 -- ftl/dirty_shutdown.sh@14 -- # 
getopts :u:c: opt 00:22:41.936 16:32:01 -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:41.936 16:32:01 -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:07.0 00:22:41.936 16:32:01 -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:41.936 16:32:01 -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:41.936 16:32:01 -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:41.936 16:32:01 -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:41.936 16:32:01 -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:41.936 16:32:01 -- ftl/dirty_shutdown.sh@45 -- # svcpid=76179 00:22:41.936 16:32:01 -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:41.936 16:32:01 -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76179 00:22:41.936 16:32:01 -- common/autotest_common.sh@829 -- # '[' -z 76179 ']' 00:22:41.936 16:32:01 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.936 16:32:01 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:41.936 16:32:01 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:41.936 16:32:01 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.936 16:32:01 -- common/autotest_common.sh@10 -- # set +x 00:22:41.936 [2024-11-09 16:32:01.630278] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:41.937 [2024-11-09 16:32:01.630661] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76179 ] 00:22:42.199 [2024-11-09 16:32:01.780405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.459 [2024-11-09 16:32:02.001763] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:42.459 [2024-11-09 16:32:02.002160] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.846 16:32:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:43.846 16:32:03 -- common/autotest_common.sh@862 -- # return 0 00:22:43.846 16:32:03 -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:07.0 103424 00:22:43.846 16:32:03 -- ftl/common.sh@54 -- # local name=nvme0 00:22:43.846 16:32:03 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:22:43.846 16:32:03 -- ftl/common.sh@56 -- # local size=103424 00:22:43.846 16:32:03 -- ftl/common.sh@59 -- # local base_bdev 00:22:43.846 16:32:03 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0 00:22:43.846 16:32:03 -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:43.846 16:32:03 -- ftl/common.sh@62 -- # local base_size 00:22:43.846 16:32:03 -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:43.846 16:32:03 -- common/autotest_common.sh@1367 -- # local bdev_name=nvme0n1 00:22:43.846 16:32:03 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:43.846 16:32:03 -- common/autotest_common.sh@1369 -- # local bs 00:22:43.846 16:32:03 -- common/autotest_common.sh@1370 -- # local nb 00:22:43.846 16:32:03 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:44.108 16:32:03 -- common/autotest_common.sh@1371 -- # 
bdev_info='[ 00:22:44.108 { 00:22:44.108 "name": "nvme0n1", 00:22:44.108 "aliases": [ 00:22:44.108 "eb264311-aae7-440d-8694-6ba581fd33d6" 00:22:44.108 ], 00:22:44.108 "product_name": "NVMe disk", 00:22:44.108 "block_size": 4096, 00:22:44.108 "num_blocks": 1310720, 00:22:44.108 "uuid": "eb264311-aae7-440d-8694-6ba581fd33d6", 00:22:44.108 "assigned_rate_limits": { 00:22:44.108 "rw_ios_per_sec": 0, 00:22:44.108 "rw_mbytes_per_sec": 0, 00:22:44.108 "r_mbytes_per_sec": 0, 00:22:44.108 "w_mbytes_per_sec": 0 00:22:44.108 }, 00:22:44.108 "claimed": true, 00:22:44.108 "claim_type": "read_many_write_one", 00:22:44.108 "zoned": false, 00:22:44.108 "supported_io_types": { 00:22:44.108 "read": true, 00:22:44.108 "write": true, 00:22:44.108 "unmap": true, 00:22:44.108 "write_zeroes": true, 00:22:44.108 "flush": true, 00:22:44.108 "reset": true, 00:22:44.108 "compare": true, 00:22:44.108 "compare_and_write": false, 00:22:44.108 "abort": true, 00:22:44.108 "nvme_admin": true, 00:22:44.108 "nvme_io": true 00:22:44.108 }, 00:22:44.108 "driver_specific": { 00:22:44.108 "nvme": [ 00:22:44.108 { 00:22:44.108 "pci_address": "0000:00:07.0", 00:22:44.108 "trid": { 00:22:44.108 "trtype": "PCIe", 00:22:44.108 "traddr": "0000:00:07.0" 00:22:44.108 }, 00:22:44.108 "ctrlr_data": { 00:22:44.108 "cntlid": 0, 00:22:44.108 "vendor_id": "0x1b36", 00:22:44.108 "model_number": "QEMU NVMe Ctrl", 00:22:44.108 "serial_number": "12341", 00:22:44.108 "firmware_revision": "8.0.0", 00:22:44.108 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:44.108 "oacs": { 00:22:44.108 "security": 0, 00:22:44.108 "format": 1, 00:22:44.108 "firmware": 0, 00:22:44.108 "ns_manage": 1 00:22:44.108 }, 00:22:44.108 "multi_ctrlr": false, 00:22:44.108 "ana_reporting": false 00:22:44.108 }, 00:22:44.108 "vs": { 00:22:44.108 "nvme_version": "1.4" 00:22:44.108 }, 00:22:44.108 "ns_data": { 00:22:44.108 "id": 1, 00:22:44.108 "can_share": false 00:22:44.108 } 00:22:44.108 } 00:22:44.108 ], 00:22:44.108 "mp_policy": "active_passive" 00:22:44.108 } 00:22:44.108 } 00:22:44.108 ]' 00:22:44.108 16:32:03 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:44.108 16:32:03 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:44.108 16:32:03 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:44.108 16:32:03 -- common/autotest_common.sh@1373 -- # nb=1310720 00:22:44.108 16:32:03 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:22:44.108 16:32:03 -- common/autotest_common.sh@1377 -- # echo 5120 00:22:44.108 16:32:03 -- ftl/common.sh@63 -- # base_size=5120 00:22:44.108 16:32:03 -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:44.108 16:32:03 -- ftl/common.sh@67 -- # clear_lvols 00:22:44.108 16:32:03 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:44.108 16:32:03 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:44.369 16:32:03 -- ftl/common.sh@28 -- # stores=6f524507-c819-4902-afb8-e8bd68486407 00:22:44.369 16:32:03 -- ftl/common.sh@29 -- # for lvs in $stores 00:22:44.369 16:32:03 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6f524507-c819-4902-afb8-e8bd68486407 00:22:44.667 16:32:04 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:44.667 16:32:04 -- ftl/common.sh@68 -- # lvs=8a65c4cb-9e44-4c36-b772-c6b150cda28d 00:22:44.667 16:32:04 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 
8a65c4cb-9e44-4c36-b772-c6b150cda28d 00:22:44.929 16:32:04 -- ftl/dirty_shutdown.sh@49 -- # split_bdev=52cb722b-13fe-4faf-93bc-e0bcb87a8a41 00:22:44.929 16:32:04 -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:06.0 ']' 00:22:44.929 16:32:04 -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:06.0 52cb722b-13fe-4faf-93bc-e0bcb87a8a41 00:22:44.929 16:32:04 -- ftl/common.sh@35 -- # local name=nvc0 00:22:44.929 16:32:04 -- ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:22:44.929 16:32:04 -- ftl/common.sh@37 -- # local base_bdev=52cb722b-13fe-4faf-93bc-e0bcb87a8a41 00:22:44.929 16:32:04 -- ftl/common.sh@38 -- # local cache_size= 00:22:44.929 16:32:04 -- ftl/common.sh@41 -- # get_bdev_size 52cb722b-13fe-4faf-93bc-e0bcb87a8a41 00:22:44.929 16:32:04 -- common/autotest_common.sh@1367 -- # local bdev_name=52cb722b-13fe-4faf-93bc-e0bcb87a8a41 00:22:44.929 16:32:04 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:44.929 16:32:04 -- common/autotest_common.sh@1369 -- # local bs 00:22:44.929 16:32:04 -- common/autotest_common.sh@1370 -- # local nb 00:22:44.929 16:32:04 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 52cb722b-13fe-4faf-93bc-e0bcb87a8a41 00:22:45.190 16:32:04 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:45.190 { 00:22:45.190 "name": "52cb722b-13fe-4faf-93bc-e0bcb87a8a41", 00:22:45.190 "aliases": [ 00:22:45.190 "lvs/nvme0n1p0" 00:22:45.190 ], 00:22:45.190 "product_name": "Logical Volume", 00:22:45.190 "block_size": 4096, 00:22:45.190 "num_blocks": 26476544, 00:22:45.190 "uuid": "52cb722b-13fe-4faf-93bc-e0bcb87a8a41", 00:22:45.190 "assigned_rate_limits": { 00:22:45.190 "rw_ios_per_sec": 0, 00:22:45.190 "rw_mbytes_per_sec": 0, 00:22:45.190 "r_mbytes_per_sec": 0, 00:22:45.190 "w_mbytes_per_sec": 0 00:22:45.190 }, 00:22:45.190 "claimed": false, 00:22:45.190 "zoned": false, 00:22:45.190 "supported_io_types": { 00:22:45.190 "read": true, 00:22:45.190 "write": true, 00:22:45.190 "unmap": true, 00:22:45.190 "write_zeroes": true, 00:22:45.190 "flush": false, 00:22:45.190 "reset": true, 00:22:45.190 "compare": false, 00:22:45.190 "compare_and_write": false, 00:22:45.190 "abort": false, 00:22:45.190 "nvme_admin": false, 00:22:45.190 "nvme_io": false 00:22:45.190 }, 00:22:45.190 "driver_specific": { 00:22:45.190 "lvol": { 00:22:45.190 "lvol_store_uuid": "8a65c4cb-9e44-4c36-b772-c6b150cda28d", 00:22:45.190 "base_bdev": "nvme0n1", 00:22:45.190 "thin_provision": true, 00:22:45.190 "snapshot": false, 00:22:45.190 "clone": false, 00:22:45.190 "esnap_clone": false 00:22:45.190 } 00:22:45.190 } 00:22:45.190 } 00:22:45.190 ]' 00:22:45.190 16:32:04 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:45.190 16:32:04 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:45.190 16:32:04 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:45.190 16:32:04 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:45.190 16:32:04 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:45.190 16:32:04 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:45.190 16:32:04 -- ftl/common.sh@41 -- # local base_size=5171 00:22:45.190 16:32:04 -- ftl/common.sh@44 -- # local nvc_bdev 00:22:45.190 16:32:04 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:06.0 00:22:45.452 16:32:05 -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:45.452 16:32:05 -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:45.452 16:32:05 -- ftl/common.sh@48 
-- # get_bdev_size 52cb722b-13fe-4faf-93bc-e0bcb87a8a41 00:22:45.452 16:32:05 -- common/autotest_common.sh@1367 -- # local bdev_name=52cb722b-13fe-4faf-93bc-e0bcb87a8a41 00:22:45.452 16:32:05 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:45.452 16:32:05 -- common/autotest_common.sh@1369 -- # local bs 00:22:45.452 16:32:05 -- common/autotest_common.sh@1370 -- # local nb 00:22:45.452 16:32:05 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 52cb722b-13fe-4faf-93bc-e0bcb87a8a41 00:22:45.712 16:32:05 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:45.712 { 00:22:45.712 "name": "52cb722b-13fe-4faf-93bc-e0bcb87a8a41", 00:22:45.712 "aliases": [ 00:22:45.712 "lvs/nvme0n1p0" 00:22:45.712 ], 00:22:45.712 "product_name": "Logical Volume", 00:22:45.712 "block_size": 4096, 00:22:45.712 "num_blocks": 26476544, 00:22:45.712 "uuid": "52cb722b-13fe-4faf-93bc-e0bcb87a8a41", 00:22:45.712 "assigned_rate_limits": { 00:22:45.712 "rw_ios_per_sec": 0, 00:22:45.712 "rw_mbytes_per_sec": 0, 00:22:45.712 "r_mbytes_per_sec": 0, 00:22:45.712 "w_mbytes_per_sec": 0 00:22:45.712 }, 00:22:45.712 "claimed": false, 00:22:45.712 "zoned": false, 00:22:45.712 "supported_io_types": { 00:22:45.712 "read": true, 00:22:45.712 "write": true, 00:22:45.712 "unmap": true, 00:22:45.712 "write_zeroes": true, 00:22:45.712 "flush": false, 00:22:45.712 "reset": true, 00:22:45.712 "compare": false, 00:22:45.712 "compare_and_write": false, 00:22:45.712 "abort": false, 00:22:45.712 "nvme_admin": false, 00:22:45.712 "nvme_io": false 00:22:45.712 }, 00:22:45.712 "driver_specific": { 00:22:45.712 "lvol": { 00:22:45.712 "lvol_store_uuid": "8a65c4cb-9e44-4c36-b772-c6b150cda28d", 00:22:45.712 "base_bdev": "nvme0n1", 00:22:45.712 "thin_provision": true, 00:22:45.712 "snapshot": false, 00:22:45.712 "clone": false, 00:22:45.712 "esnap_clone": false 00:22:45.712 } 00:22:45.712 } 00:22:45.712 } 00:22:45.712 ]' 00:22:45.712 16:32:05 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:45.712 16:32:05 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:45.712 16:32:05 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:45.712 16:32:05 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:45.712 16:32:05 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:45.712 16:32:05 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:45.712 16:32:05 -- ftl/common.sh@48 -- # cache_size=5171 00:22:45.712 16:32:05 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:45.970 16:32:05 -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:22:45.970 16:32:05 -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 52cb722b-13fe-4faf-93bc-e0bcb87a8a41 00:22:45.970 16:32:05 -- common/autotest_common.sh@1367 -- # local bdev_name=52cb722b-13fe-4faf-93bc-e0bcb87a8a41 00:22:45.970 16:32:05 -- common/autotest_common.sh@1368 -- # local bdev_info 00:22:45.970 16:32:05 -- common/autotest_common.sh@1369 -- # local bs 00:22:45.970 16:32:05 -- common/autotest_common.sh@1370 -- # local nb 00:22:45.970 16:32:05 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 52cb722b-13fe-4faf-93bc-e0bcb87a8a41 00:22:46.229 16:32:05 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:22:46.229 { 00:22:46.229 "name": "52cb722b-13fe-4faf-93bc-e0bcb87a8a41", 00:22:46.229 "aliases": [ 00:22:46.229 "lvs/nvme0n1p0" 00:22:46.229 ], 00:22:46.229 "product_name": "Logical Volume", 00:22:46.229 
"block_size": 4096, 00:22:46.229 "num_blocks": 26476544, 00:22:46.229 "uuid": "52cb722b-13fe-4faf-93bc-e0bcb87a8a41", 00:22:46.229 "assigned_rate_limits": { 00:22:46.229 "rw_ios_per_sec": 0, 00:22:46.229 "rw_mbytes_per_sec": 0, 00:22:46.229 "r_mbytes_per_sec": 0, 00:22:46.229 "w_mbytes_per_sec": 0 00:22:46.229 }, 00:22:46.229 "claimed": false, 00:22:46.229 "zoned": false, 00:22:46.229 "supported_io_types": { 00:22:46.229 "read": true, 00:22:46.229 "write": true, 00:22:46.229 "unmap": true, 00:22:46.229 "write_zeroes": true, 00:22:46.229 "flush": false, 00:22:46.229 "reset": true, 00:22:46.229 "compare": false, 00:22:46.229 "compare_and_write": false, 00:22:46.229 "abort": false, 00:22:46.229 "nvme_admin": false, 00:22:46.229 "nvme_io": false 00:22:46.229 }, 00:22:46.229 "driver_specific": { 00:22:46.229 "lvol": { 00:22:46.229 "lvol_store_uuid": "8a65c4cb-9e44-4c36-b772-c6b150cda28d", 00:22:46.229 "base_bdev": "nvme0n1", 00:22:46.229 "thin_provision": true, 00:22:46.229 "snapshot": false, 00:22:46.229 "clone": false, 00:22:46.229 "esnap_clone": false 00:22:46.229 } 00:22:46.229 } 00:22:46.229 } 00:22:46.229 ]' 00:22:46.229 16:32:05 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:22:46.229 16:32:05 -- common/autotest_common.sh@1372 -- # bs=4096 00:22:46.229 16:32:05 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:22:46.229 16:32:05 -- common/autotest_common.sh@1373 -- # nb=26476544 00:22:46.229 16:32:05 -- common/autotest_common.sh@1376 -- # bdev_size=103424 00:22:46.229 16:32:05 -- common/autotest_common.sh@1377 -- # echo 103424 00:22:46.229 16:32:05 -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:22:46.229 16:32:05 -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 52cb722b-13fe-4faf-93bc-e0bcb87a8a41 --l2p_dram_limit 10' 00:22:46.229 16:32:05 -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:22:46.229 16:32:05 -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:06.0 ']' 00:22:46.229 16:32:05 -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:46.229 16:32:05 -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 52cb722b-13fe-4faf-93bc-e0bcb87a8a41 --l2p_dram_limit 10 -c nvc0n1p0 00:22:46.489 [2024-11-09 16:32:06.001735] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.489 [2024-11-09 16:32:06.001771] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:46.489 [2024-11-09 16:32:06.001783] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:46.489 [2024-11-09 16:32:06.001790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.489 [2024-11-09 16:32:06.001832] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.489 [2024-11-09 16:32:06.001840] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:46.489 [2024-11-09 16:32:06.001848] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:46.489 [2024-11-09 16:32:06.001854] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.489 [2024-11-09 16:32:06.001869] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:46.489 [2024-11-09 16:32:06.002479] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:46.489 [2024-11-09 16:32:06.002497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:22:46.489 [2024-11-09 16:32:06.002503] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:46.489 [2024-11-09 16:32:06.002511] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:22:46.489 [2024-11-09 16:32:06.002517] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.489 [2024-11-09 16:32:06.002542] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 902f49f9-b410-4954-8a62-8ab8809a921f 00:22:46.489 [2024-11-09 16:32:06.003466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.489 [2024-11-09 16:32:06.003484] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:46.489 [2024-11-09 16:32:06.003492] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:46.489 [2024-11-09 16:32:06.003499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.489 [2024-11-09 16:32:06.008360] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.489 [2024-11-09 16:32:06.008397] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:46.489 [2024-11-09 16:32:06.008408] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.618 ms 00:22:46.489 [2024-11-09 16:32:06.008416] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.489 [2024-11-09 16:32:06.008486] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.489 [2024-11-09 16:32:06.008496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:46.489 [2024-11-09 16:32:06.008502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:46.489 [2024-11-09 16:32:06.008513] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.489 [2024-11-09 16:32:06.008556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.489 [2024-11-09 16:32:06.008568] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:46.489 [2024-11-09 16:32:06.008574] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:46.489 [2024-11-09 16:32:06.008581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.489 [2024-11-09 16:32:06.008600] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:46.489 [2024-11-09 16:32:06.011558] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.489 [2024-11-09 16:32:06.011581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:46.489 [2024-11-09 16:32:06.011590] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.963 ms 00:22:46.489 [2024-11-09 16:32:06.011596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.489 [2024-11-09 16:32:06.011622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.489 [2024-11-09 16:32:06.011628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:46.489 [2024-11-09 16:32:06.011636] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:46.489 [2024-11-09 16:32:06.011642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.489 [2024-11-09 16:32:06.011655] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:46.489 [2024-11-09 16:32:06.011743] 
upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:22:46.489 [2024-11-09 16:32:06.011755] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:46.489 [2024-11-09 16:32:06.011762] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:22:46.489 [2024-11-09 16:32:06.011771] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:46.489 [2024-11-09 16:32:06.011778] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:46.489 [2024-11-09 16:32:06.011786] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:46.489 [2024-11-09 16:32:06.011798] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:46.489 [2024-11-09 16:32:06.011805] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:22:46.489 [2024-11-09 16:32:06.011810] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:22:46.489 [2024-11-09 16:32:06.011818] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.489 [2024-11-09 16:32:06.011823] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:46.489 [2024-11-09 16:32:06.011830] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.163 ms 00:22:46.489 [2024-11-09 16:32:06.011835] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.489 [2024-11-09 16:32:06.011884] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.489 [2024-11-09 16:32:06.011890] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:46.489 [2024-11-09 16:32:06.011896] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:46.489 [2024-11-09 16:32:06.011903] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.489 [2024-11-09 16:32:06.011959] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:46.489 [2024-11-09 16:32:06.011966] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:46.489 [2024-11-09 16:32:06.011973] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:46.489 [2024-11-09 16:32:06.011978] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.489 [2024-11-09 16:32:06.011985] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:46.489 [2024-11-09 16:32:06.011990] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:46.489 [2024-11-09 16:32:06.011996] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:46.489 [2024-11-09 16:32:06.012001] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:46.489 [2024-11-09 16:32:06.012007] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:46.490 [2024-11-09 16:32:06.012012] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:46.490 [2024-11-09 16:32:06.012018] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:46.490 [2024-11-09 16:32:06.012023] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:46.490 [2024-11-09 16:32:06.012031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:46.490 
[2024-11-09 16:32:06.012036] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:46.490 [2024-11-09 16:32:06.012042] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:22:46.490 [2024-11-09 16:32:06.012046] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.490 [2024-11-09 16:32:06.012054] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:46.490 [2024-11-09 16:32:06.012059] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:22:46.490 [2024-11-09 16:32:06.012065] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.490 [2024-11-09 16:32:06.012070] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:22:46.490 [2024-11-09 16:32:06.012078] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:22:46.490 [2024-11-09 16:32:06.012083] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:22:46.490 [2024-11-09 16:32:06.012089] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:46.490 [2024-11-09 16:32:06.012094] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:46.490 [2024-11-09 16:32:06.012100] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:46.490 [2024-11-09 16:32:06.012105] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:46.490 [2024-11-09 16:32:06.012111] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:22:46.490 [2024-11-09 16:32:06.012116] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:46.490 [2024-11-09 16:32:06.012122] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:46.490 [2024-11-09 16:32:06.012127] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:46.490 [2024-11-09 16:32:06.012132] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:46.490 [2024-11-09 16:32:06.012138] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:46.490 [2024-11-09 16:32:06.012145] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:22:46.490 [2024-11-09 16:32:06.012150] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:22:46.490 [2024-11-09 16:32:06.012156] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:46.490 [2024-11-09 16:32:06.012160] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:46.490 [2024-11-09 16:32:06.012166] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:46.490 [2024-11-09 16:32:06.012171] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:46.490 [2024-11-09 16:32:06.012178] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:22:46.490 [2024-11-09 16:32:06.012183] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:46.490 [2024-11-09 16:32:06.012188] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:46.490 [2024-11-09 16:32:06.012194] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:46.490 [2024-11-09 16:32:06.012201] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:46.490 [2024-11-09 16:32:06.012206] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:46.490 [2024-11-09 16:32:06.012214] ftl_layout.c: 115:dump_region: *NOTICE*: 
[FTL][ftl0] Region vmap 00:22:46.490 [2024-11-09 16:32:06.012219] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:46.490 [2024-11-09 16:32:06.012240] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:46.490 [2024-11-09 16:32:06.012246] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:46.490 [2024-11-09 16:32:06.012255] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:46.490 [2024-11-09 16:32:06.012260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:46.490 [2024-11-09 16:32:06.012267] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:46.490 [2024-11-09 16:32:06.012274] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:46.490 [2024-11-09 16:32:06.012283] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:46.490 [2024-11-09 16:32:06.012289] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:22:46.490 [2024-11-09 16:32:06.012295] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:22:46.490 [2024-11-09 16:32:06.012301] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:22:46.490 [2024-11-09 16:32:06.012308] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:22:46.490 [2024-11-09 16:32:06.012336] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:22:46.490 [2024-11-09 16:32:06.012343] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:22:46.490 [2024-11-09 16:32:06.012349] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:22:46.490 [2024-11-09 16:32:06.012355] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:22:46.490 [2024-11-09 16:32:06.012360] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:22:46.490 [2024-11-09 16:32:06.012367] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:22:46.490 [2024-11-09 16:32:06.012373] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:22:46.490 [2024-11-09 16:32:06.012382] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:22:46.490 [2024-11-09 16:32:06.012388] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:46.490 [2024-11-09 16:32:06.012395] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:46.490 [2024-11-09 16:32:06.012402] 
upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:46.490 [2024-11-09 16:32:06.012409] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:46.490 [2024-11-09 16:32:06.012415] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:46.490 [2024-11-09 16:32:06.012421] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:46.490 [2024-11-09 16:32:06.012427] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.490 [2024-11-09 16:32:06.012434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:46.490 [2024-11-09 16:32:06.012440] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.503 ms 00:22:46.490 [2024-11-09 16:32:06.012446] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.490 [2024-11-09 16:32:06.024380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.490 [2024-11-09 16:32:06.024406] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:46.490 [2024-11-09 16:32:06.024414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.894 ms 00:22:46.490 [2024-11-09 16:32:06.024422] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.490 [2024-11-09 16:32:06.024487] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.490 [2024-11-09 16:32:06.024496] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:46.490 [2024-11-09 16:32:06.024504] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:22:46.490 [2024-11-09 16:32:06.024511] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.490 [2024-11-09 16:32:06.048141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.490 [2024-11-09 16:32:06.048169] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:46.490 [2024-11-09 16:32:06.048177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.598 ms 00:22:46.490 [2024-11-09 16:32:06.048185] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.490 [2024-11-09 16:32:06.048206] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.490 [2024-11-09 16:32:06.048215] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:46.490 [2024-11-09 16:32:06.048222] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:22:46.490 [2024-11-09 16:32:06.048245] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.490 [2024-11-09 16:32:06.048532] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.490 [2024-11-09 16:32:06.048546] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:46.490 [2024-11-09 16:32:06.048554] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:22:46.490 [2024-11-09 16:32:06.048560] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.490 [2024-11-09 16:32:06.048644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.490 [2024-11-09 16:32:06.048654] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:46.490 [2024-11-09 16:32:06.048660] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:46.490 [2024-11-09 16:32:06.048667] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.490 [2024-11-09 16:32:06.060569] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.490 [2024-11-09 16:32:06.060594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:46.490 [2024-11-09 16:32:06.060601] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.888 ms 00:22:46.490 [2024-11-09 16:32:06.060608] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.490 [2024-11-09 16:32:06.069550] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:46.490 [2024-11-09 16:32:06.071962] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.490 [2024-11-09 16:32:06.071985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:46.490 [2024-11-09 16:32:06.071995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.300 ms 00:22:46.490 [2024-11-09 16:32:06.072001] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.490 [2024-11-09 16:32:06.141547] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:46.490 [2024-11-09 16:32:06.141589] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:46.491 [2024-11-09 16:32:06.141605] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.520 ms 00:22:46.491 [2024-11-09 16:32:06.141613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:46.491 [2024-11-09 16:32:06.141659] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] First startup needs to scrub nv cache data region, this may take some time. 
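Two figures in the startup dump above can be cross-checked by hand: the superblock reports 20971520 L2P entries with an address size of 4 bytes, which is exactly the 80.00 MiB shown for the l2p region, and the data_nvc region is 4096.00 MiB, the same 4GiB the first-startup scrub announces next. A quick check with shell arithmetic (illustration only, not part of the test scripts):

    # Values copied from the layout dump above; MiB = 1024*1024 bytes.
    l2p_entries=20971520
    l2p_addr_size=4                 # "L2P address size: 4"
    echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))   # 80 (MiB), matches Region l2p
    data_nvc_mib=4096               # "Region data_nvc ... blocks: 4096.00 MiB"
    echo $(( data_nvc_mib / 1024 ))                          # 4 (GiB), the scrub size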
00:22:46.491 [2024-11-09 16:32:06.141670] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 4GiB
00:22:50.690 [2024-11-09 16:32:09.780526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:50.690 [2024-11-09 16:32:09.780610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache
00:22:50.690 [2024-11-09 16:32:09.780632] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3638.841 ms
00:22:50.690 [2024-11-09 16:32:09.780642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:50.690 [2024-11-09 16:32:09.780870] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:50.690 [2024-11-09 16:32:09.780885] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:22:50.690 [2024-11-09 16:32:09.780901] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms
00:22:50.690 [2024-11-09 16:32:09.780909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:50.690 [2024-11-09 16:32:09.807517] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:50.690 [2024-11-09 16:32:09.807571] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata
00:22:50.690 [2024-11-09 16:32:09.807588] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.548 ms
00:22:50.690 [2024-11-09 16:32:09.807596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:50.690 [2024-11-09 16:32:09.832803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:50.690 [2024-11-09 16:32:09.833107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata
00:22:50.690 [2024-11-09 16:32:09.833141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.150 ms
00:22:50.690 [2024-11-09 16:32:09.833148] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:50.690 [2024-11-09 16:32:09.833581] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:50.690 [2024-11-09 16:32:09.833595] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:22:50.690 [2024-11-09 16:32:09.833607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.307 ms
00:22:50.690 [2024-11-09 16:32:09.833615] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:50.690 [2024-11-09 16:32:09.908819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:50.690 [2024-11-09 16:32:09.908866] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region
00:22:50.690 [2024-11-09 16:32:09.908882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.152 ms
00:22:50.690 [2024-11-09 16:32:09.908891] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:50.690 [2024-11-09 16:32:09.936966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:50.690 [2024-11-09 16:32:09.937015] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map
00:22:50.690 [2024-11-09 16:32:09.937031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.018 ms
00:22:50.690 [2024-11-09 16:32:09.937039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:50.690 [2024-11-09 16:32:09.938570] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:50.690 [2024-11-09 16:32:09.938617] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs
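Startups and shutdowns like this one emit dozens of Action/name/duration/status quadruplets from trace_step (lines 406-410 of mngt/ftl_mngt.c, per the records themselves). When hunting for slow steps it helps to fold them into a name/duration table; a small awk sketch, assuming one record per line as in the block immediately above and a capture file named ftl.log (both assumptions):

    # Pair each management step's name (407:trace_step) with the
    # duration that follows it (409:trace_step).
    awk '/407:trace_step/ { sub(/.*name: /, ""); name = $0 }
         /409:trace_step/ { sub(/.*duration: /, ""); print $1 " ms\t" name }' ftl.log

Run against this startup it would surface "Scrub NV cache" (3638.841 ms) as the dominant step, consistent with the durations printed above.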
00:22:50.690 [2024-11-09 16:32:09.938634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.464 ms 00:22:50.690 [2024-11-09 16:32:09.938642] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.690 [2024-11-09 16:32:09.964980] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.690 [2024-11-09 16:32:09.965030] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:50.690 [2024-11-09 16:32:09.965047] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.274 ms 00:22:50.690 [2024-11-09 16:32:09.965054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.690 [2024-11-09 16:32:09.965128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.690 [2024-11-09 16:32:09.965139] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:50.690 [2024-11-09 16:32:09.965153] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:50.690 [2024-11-09 16:32:09.965161] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.690 [2024-11-09 16:32:09.965286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:50.690 [2024-11-09 16:32:09.965299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:50.690 [2024-11-09 16:32:09.965310] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:50.690 [2024-11-09 16:32:09.965318] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:50.690 [2024-11-09 16:32:09.966462] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3964.184 ms, result 0 00:22:50.690 { 00:22:50.690 "name": "ftl0", 00:22:50.690 "uuid": "902f49f9-b410-4954-8a62-8ab8809a921f" 00:22:50.690 } 00:22:50.690 16:32:09 -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:22:50.690 16:32:09 -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:50.690 16:32:10 -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:22:50.690 16:32:10 -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:22:50.690 16:32:10 -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:22:50.690 /dev/nbd0 00:22:50.690 16:32:10 -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:22:50.690 16:32:10 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:22:50.690 16:32:10 -- common/autotest_common.sh@867 -- # local i 00:22:50.690 16:32:10 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:22:50.690 16:32:10 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:22:50.690 16:32:10 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:22:50.690 16:32:10 -- common/autotest_common.sh@871 -- # break 00:22:50.690 16:32:10 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:22:50.690 16:32:10 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:22:50.690 16:32:10 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:22:50.690 1+0 records in 00:22:50.690 1+0 records out 00:22:50.690 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000452849 s, 9.0 MB/s 00:22:50.690 16:32:10 -- common/autotest_common.sh@884 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:50.690 16:32:10 -- common/autotest_common.sh@884 -- # size=4096 00:22:50.690 16:32:10 -- 
common/autotest_common.sh@885 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:22:50.690 16:32:10 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:22:50.690 16:32:10 -- common/autotest_common.sh@887 -- # return 0 00:22:50.690 16:32:10 -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:22:50.952 [2024-11-09 16:32:10.525132] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:22:50.952 [2024-11-09 16:32:10.525300] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76334 ] 00:22:50.952 [2024-11-09 16:32:10.676911] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.214 [2024-11-09 16:32:10.954312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.595  [2024-11-09T16:32:13.300Z] Copying: 192/1024 [MB] (192 MBps) [2024-11-09T16:32:14.677Z] Copying: 418/1024 [MB] (225 MBps) [2024-11-09T16:32:15.613Z] Copying: 677/1024 [MB] (258 MBps) [2024-11-09T16:32:15.872Z] Copying: 929/1024 [MB] (252 MBps) [2024-11-09T16:32:16.439Z] Copying: 1024/1024 [MB] (average 234 MBps) 00:22:56.669 00:22:56.669 16:32:16 -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:59.202 16:32:18 -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 -r /var/tmp/spdk_dd.sock --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:22:59.202 [2024-11-09 16:32:18.537394] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
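The `-- #` trace lines above come from the waitfornbd helper in common/autotest_common.sh: before the test trusts /dev/nbd0 it polls /proc/partitions for the device name, then proves the device is actually readable with a single direct-I/O 4 KiB read (the dd/stat/rm sequence whose "1+0 records in/out" output appears above). A minimal sketch of that check, reconstructed from the trace rather than copied from the helper's source; the 0.1 s back-off and the /tmp scratch path are assumptions:

    # Reconstructed sketch of waitfornbd; the real helper in
    # test/common/autotest_common.sh may differ in detail.
    waitfornbd() {
        local nbd_name=$1 i size
        # Poll until the kernel lists the nbd device (up to 20 tries).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; the traced run hit on the first pass
        done
        # /proc/partitions can list the device before it is readable, so
        # confirm with one direct-I/O 4 KiB read, as the trace above does.
        for ((i = 1; i <= 20; i++)); do
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct 2>/dev/null; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [ "$size" != 0 ] && return 0
            fi
            sleep 0.1
        done
        return 1
    }

It is invoked as waitfornbd nbd0 immediately after rpc.py nbd_start_disk ftl0 /dev/nbd0, which is exactly what step @72 of dirty_shutdown.sh does above.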
00:22:59.202 [2024-11-09 16:32:18.537514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76422 ] 00:22:59.202 [2024-11-09 16:32:18.687088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.202 [2024-11-09 16:32:18.854402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.584  [2024-11-09T16:32:21.298Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-09T16:32:22.239Z] Copying: 37/1024 [MB] (15 MBps) [2024-11-09T16:32:23.181Z] Copying: 53/1024 [MB] (16 MBps) [2024-11-09T16:32:24.126Z] Copying: 67/1024 [MB] (14 MBps) [2024-11-09T16:32:25.070Z] Copying: 83/1024 [MB] (15 MBps) [2024-11-09T16:32:26.453Z] Copying: 100/1024 [MB] (17 MBps) [2024-11-09T16:32:27.448Z] Copying: 116/1024 [MB] (16 MBps) [2024-11-09T16:32:28.391Z] Copying: 142/1024 [MB] (25 MBps) [2024-11-09T16:32:29.334Z] Copying: 159/1024 [MB] (17 MBps) [2024-11-09T16:32:30.278Z] Copying: 177/1024 [MB] (17 MBps) [2024-11-09T16:32:31.221Z] Copying: 195/1024 [MB] (17 MBps) [2024-11-09T16:32:32.160Z] Copying: 213/1024 [MB] (18 MBps) [2024-11-09T16:32:33.102Z] Copying: 233/1024 [MB] (20 MBps) [2024-11-09T16:32:34.512Z] Copying: 246/1024 [MB] (12 MBps) [2024-11-09T16:32:35.087Z] Copying: 259/1024 [MB] (13 MBps) [2024-11-09T16:32:36.468Z] Copying: 282/1024 [MB] (23 MBps) [2024-11-09T16:32:37.410Z] Copying: 307/1024 [MB] (24 MBps) [2024-11-09T16:32:38.353Z] Copying: 325/1024 [MB] (17 MBps) [2024-11-09T16:32:39.293Z] Copying: 340/1024 [MB] (14 MBps) [2024-11-09T16:32:40.233Z] Copying: 362/1024 [MB] (22 MBps) [2024-11-09T16:32:41.177Z] Copying: 378/1024 [MB] (16 MBps) [2024-11-09T16:32:42.122Z] Copying: 399/1024 [MB] (20 MBps) [2024-11-09T16:32:43.064Z] Copying: 415/1024 [MB] (16 MBps) [2024-11-09T16:32:44.452Z] Copying: 433/1024 [MB] (18 MBps) [2024-11-09T16:32:45.397Z] Copying: 449/1024 [MB] (15 MBps) [2024-11-09T16:32:46.342Z] Copying: 465/1024 [MB] (16 MBps) [2024-11-09T16:32:47.286Z] Copying: 479/1024 [MB] (14 MBps) [2024-11-09T16:32:48.225Z] Copying: 493/1024 [MB] (13 MBps) [2024-11-09T16:32:49.165Z] Copying: 510/1024 [MB] (16 MBps) [2024-11-09T16:32:50.100Z] Copying: 535/1024 [MB] (24 MBps) [2024-11-09T16:32:51.473Z] Copying: 564/1024 [MB] (29 MBps) [2024-11-09T16:32:52.407Z] Copying: 589/1024 [MB] (24 MBps) [2024-11-09T16:32:53.340Z] Copying: 615/1024 [MB] (26 MBps) [2024-11-09T16:32:54.272Z] Copying: 643/1024 [MB] (27 MBps) [2024-11-09T16:32:55.206Z] Copying: 670/1024 [MB] (26 MBps) [2024-11-09T16:32:56.140Z] Copying: 705/1024 [MB] (35 MBps) [2024-11-09T16:32:57.074Z] Copying: 735/1024 [MB] (29 MBps) [2024-11-09T16:32:58.448Z] Copying: 760/1024 [MB] (25 MBps) [2024-11-09T16:32:59.383Z] Copying: 785/1024 [MB] (25 MBps) [2024-11-09T16:33:00.317Z] Copying: 807/1024 [MB] (21 MBps) [2024-11-09T16:33:01.259Z] Copying: 830/1024 [MB] (22 MBps) [2024-11-09T16:33:02.199Z] Copying: 845/1024 [MB] (15 MBps) [2024-11-09T16:33:03.145Z] Copying: 855/1024 [MB] (10 MBps) [2024-11-09T16:33:04.086Z] Copying: 870/1024 [MB] (15 MBps) [2024-11-09T16:33:05.465Z] Copying: 885/1024 [MB] (14 MBps) [2024-11-09T16:33:06.406Z] Copying: 911/1024 [MB] (26 MBps) [2024-11-09T16:33:07.349Z] Copying: 934/1024 [MB] (23 MBps) [2024-11-09T16:33:08.293Z] Copying: 952/1024 [MB] (18 MBps) [2024-11-09T16:33:09.236Z] Copying: 970/1024 [MB] (17 MBps) [2024-11-09T16:33:10.189Z] Copying: 982/1024 [MB] (11 MBps) [2024-11-09T16:33:11.130Z] Copying: 995/1024 
[MB] (12 MBps) [2024-11-09T16:33:11.700Z] Copying: 1014/1024 [MB] (19 MBps) [2024-11-09T16:33:12.266Z] Copying: 1024/1024 [MB] (average 19 MBps) 00:23:52.496 00:23:52.496 16:33:12 -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:52.496 16:33:12 -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:52.754 16:33:12 -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:53.012 [2024-11-09 16:33:12.536001] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.012 [2024-11-09 16:33:12.536041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:53.012 [2024-11-09 16:33:12.536053] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:53.012 [2024-11-09 16:33:12.536060] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.012 [2024-11-09 16:33:12.536078] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:53.012 [2024-11-09 16:33:12.537968] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.012 [2024-11-09 16:33:12.537990] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:53.012 [2024-11-09 16:33:12.538000] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.875 ms 00:23:53.012 [2024-11-09 16:33:12.538007] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.012 [2024-11-09 16:33:12.539563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.012 [2024-11-09 16:33:12.539668] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:53.012 [2024-11-09 16:33:12.539688] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.535 ms 00:23:53.012 [2024-11-09 16:33:12.539694] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.012 [2024-11-09 16:33:12.552667] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.012 [2024-11-09 16:33:12.552693] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:53.012 [2024-11-09 16:33:12.552703] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.955 ms 00:23:53.012 [2024-11-09 16:33:12.552709] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.012 [2024-11-09 16:33:12.557458] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.012 [2024-11-09 16:33:12.557481] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:23:53.012 [2024-11-09 16:33:12.557491] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.718 ms 00:23:53.012 [2024-11-09 16:33:12.557499] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.012 [2024-11-09 16:33:12.576133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.012 [2024-11-09 16:33:12.576160] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:53.013 [2024-11-09 16:33:12.576171] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.588 ms 00:23:53.013 [2024-11-09 16:33:12.576177] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.013 [2024-11-09 16:33:12.588240] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.013 [2024-11-09 16:33:12.588269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map 
metadata 00:23:53.013 [2024-11-09 16:33:12.588281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.032 ms 00:23:53.013 [2024-11-09 16:33:12.588287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.013 [2024-11-09 16:33:12.588402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.013 [2024-11-09 16:33:12.588410] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:53.013 [2024-11-09 16:33:12.588418] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:23:53.013 [2024-11-09 16:33:12.588424] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.013 [2024-11-09 16:33:12.606431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.013 [2024-11-09 16:33:12.606632] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:23:53.013 [2024-11-09 16:33:12.606647] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.990 ms 00:23:53.013 [2024-11-09 16:33:12.606652] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.013 [2024-11-09 16:33:12.624431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.013 [2024-11-09 16:33:12.624454] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:23:53.013 [2024-11-09 16:33:12.624463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.751 ms 00:23:53.013 [2024-11-09 16:33:12.624468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.013 [2024-11-09 16:33:12.641914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.013 [2024-11-09 16:33:12.642088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:53.013 [2024-11-09 16:33:12.642103] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.415 ms 00:23:53.013 [2024-11-09 16:33:12.642108] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.013 [2024-11-09 16:33:12.659167] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.013 [2024-11-09 16:33:12.659191] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:53.013 [2024-11-09 16:33:12.659201] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.005 ms 00:23:53.013 [2024-11-09 16:33:12.659206] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.013 [2024-11-09 16:33:12.659244] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:53.013 [2024-11-09 16:33:12.659255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659459] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 
16:33:12.659620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 
00:23:53.013 [2024-11-09 16:33:12.659785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:53.013 [2024-11-09 16:33:12.659910] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:53.013 [2024-11-09 16:33:12.659917] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 902f49f9-b410-4954-8a62-8ab8809a921f 00:23:53.013 [2024-11-09 16:33:12.659924] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:53.013 [2024-11-09 16:33:12.659931] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:53.013 [2024-11-09 16:33:12.659936] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:53.013 [2024-11-09 16:33:12.659943] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:53.013 [2024-11-09 16:33:12.659948] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 
00:23:53.013 [2024-11-09 16:33:12.659955] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:53.013 [2024-11-09 16:33:12.659961] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:53.013 [2024-11-09 16:33:12.659967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:53.013 [2024-11-09 16:33:12.659972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:53.013 [2024-11-09 16:33:12.659979] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.013 [2024-11-09 16:33:12.659985] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:53.013 [2024-11-09 16:33:12.659992] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.737 ms 00:23:53.013 [2024-11-09 16:33:12.659998] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.013 [2024-11-09 16:33:12.669361] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.013 [2024-11-09 16:33:12.669387] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:53.013 [2024-11-09 16:33:12.669395] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.339 ms 00:23:53.013 [2024-11-09 16:33:12.669401] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.013 [2024-11-09 16:33:12.669546] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:53.013 [2024-11-09 16:33:12.669553] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:53.013 [2024-11-09 16:33:12.669560] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:23:53.013 [2024-11-09 16:33:12.669565] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.013 [2024-11-09 16:33:12.704501] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.013 [2024-11-09 16:33:12.704610] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:53.013 [2024-11-09 16:33:12.704626] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.013 [2024-11-09 16:33:12.704632] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.013 [2024-11-09 16:33:12.704678] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.013 [2024-11-09 16:33:12.704684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:53.013 [2024-11-09 16:33:12.704692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.013 [2024-11-09 16:33:12.704697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.013 [2024-11-09 16:33:12.704754] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.013 [2024-11-09 16:33:12.704762] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:53.013 [2024-11-09 16:33:12.704769] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.013 [2024-11-09 16:33:12.704774] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.013 [2024-11-09 16:33:12.704789] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.013 [2024-11-09 16:33:12.704795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:53.013 [2024-11-09 16:33:12.704802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.013 [2024-11-09 16:33:12.704807] mngt/ftl_mngt.c: 
410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.013 [2024-11-09 16:33:12.762821] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.013 [2024-11-09 16:33:12.762855] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:53.013 [2024-11-09 16:33:12.762866] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.013 [2024-11-09 16:33:12.762872] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.272 [2024-11-09 16:33:12.785463] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.272 [2024-11-09 16:33:12.785488] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:53.272 [2024-11-09 16:33:12.785497] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.272 [2024-11-09 16:33:12.785503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.272 [2024-11-09 16:33:12.785552] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.272 [2024-11-09 16:33:12.785559] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:53.272 [2024-11-09 16:33:12.785567] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.272 [2024-11-09 16:33:12.785573] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.272 [2024-11-09 16:33:12.785607] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.272 [2024-11-09 16:33:12.785613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:53.272 [2024-11-09 16:33:12.785621] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.272 [2024-11-09 16:33:12.785626] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.272 [2024-11-09 16:33:12.785697] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.272 [2024-11-09 16:33:12.785707] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:53.272 [2024-11-09 16:33:12.785714] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.272 [2024-11-09 16:33:12.785720] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.272 [2024-11-09 16:33:12.785746] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.272 [2024-11-09 16:33:12.785753] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:53.272 [2024-11-09 16:33:12.785760] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.272 [2024-11-09 16:33:12.785766] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.272 [2024-11-09 16:33:12.785795] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.272 [2024-11-09 16:33:12.785803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:53.272 [2024-11-09 16:33:12.785810] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:53.272 [2024-11-09 16:33:12.785815] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.272 [2024-11-09 16:33:12.785850] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:53.272 [2024-11-09 16:33:12.785857] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:53.272 [2024-11-09 16:33:12.785864] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:23:53.272 [2024-11-09 16:33:12.785869] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:53.272 [2024-11-09 16:33:12.785975] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 249.942 ms, result 0 00:23:53.272 true 00:23:53.272 16:33:12 -- ftl/dirty_shutdown.sh@83 -- # kill -9 76179 00:23:53.272 16:33:12 -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76179 00:23:53.272 16:33:12 -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:53.272 [2024-11-09 16:33:12.877588] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:23:53.272 [2024-11-09 16:33:12.877862] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76988 ] 00:23:53.272 [2024-11-09 16:33:13.028212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.531 [2024-11-09 16:33:13.181555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.905  [2024-11-09T16:33:15.609Z] Copying: 258/1024 [MB] (258 MBps) [2024-11-09T16:33:16.546Z] Copying: 519/1024 [MB] (261 MBps) [2024-11-09T16:33:17.484Z] Copying: 778/1024 [MB] (258 MBps) [2024-11-09T16:33:18.052Z] Copying: 1024/1024 [MB] (average 259 MBps) 00:23:58.282 00:23:58.282 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76179 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:58.282 16:33:17 -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:58.282 [2024-11-09 16:33:17.981778] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
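At this point the test has synced /dev/nbd0, stopped the NBD export, and unloaded ftl0; the dirty shutdown itself is the kill -9 of the spdk_tgt at @83 (the shell's "76179 Killed ... spdk_tgt -m 0x1" notice above), after which @88 relaunches spdk_dd against the saved ftl.json, so the whole bdev stack, FTL included, has to be rebuilt from on-disk state; that leads into the blobstore recovery and "SHM: clean 0" lines that follow. A condensed sketch of the sequence; the pid and the shell variables are illustrative placeholders standing in for the script's actual values:

    # Condensed from the @83-@88 steps traced above; values are illustrative.
    svcpid=76179                                  # pid of the spdk_tgt under test
    kill -9 "$svcpid"                             # SIGKILL: the target exits with no teardown
    rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"    # drop the dead target's trace file

    # Loading the saved subsystem config recreates ftl0 inside spdk_dd itself;
    # the write lands at an offset (--seek) past the data written before the
    # kill, going through the freshly recovered device.
    "$SPDK_BIN_DIR/spdk_dd" --if="$testfile2" --ob=ftl0 \
        --count=262144 --seek=262144 --json="$ftl_json"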
00:23:58.282 [2024-11-09 16:33:17.982016] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77046 ] 00:23:58.541 [2024-11-09 16:33:18.127918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:58.541 [2024-11-09 16:33:18.264996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.799 [2024-11-09 16:33:18.471450] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:58.799 [2024-11-09 16:33:18.471498] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:58.799 [2024-11-09 16:33:18.531266] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:58.799 [2024-11-09 16:33:18.531469] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:58.799 [2024-11-09 16:33:18.531698] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:59.059 [2024-11-09 16:33:18.760852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.059 [2024-11-09 16:33:18.760892] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:59.059 [2024-11-09 16:33:18.760902] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:59.059 [2024-11-09 16:33:18.760908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.059 [2024-11-09 16:33:18.760941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.060 [2024-11-09 16:33:18.760949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:59.060 [2024-11-09 16:33:18.760957] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:59.060 [2024-11-09 16:33:18.760962] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.060 [2024-11-09 16:33:18.760974] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:59.060 [2024-11-09 16:33:18.761548] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:59.060 [2024-11-09 16:33:18.761562] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.060 [2024-11-09 16:33:18.761569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:59.060 [2024-11-09 16:33:18.761576] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:23:59.060 [2024-11-09 16:33:18.761581] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.060 [2024-11-09 16:33:18.762509] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:59.060 [2024-11-09 16:33:18.772069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.060 [2024-11-09 16:33:18.772190] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:59.060 [2024-11-09 16:33:18.772204] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.562 ms 00:23:59.060 [2024-11-09 16:33:18.772210] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.060 [2024-11-09 16:33:18.772268] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.060 [2024-11-09 16:33:18.772277] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:59.060 
[2024-11-09 16:33:18.772284] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:23:59.060 [2024-11-09 16:33:18.772289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.060 [2024-11-09 16:33:18.776617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.060 [2024-11-09 16:33:18.776641] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:59.060 [2024-11-09 16:33:18.776648] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.285 ms 00:23:59.060 [2024-11-09 16:33:18.776654] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.060 [2024-11-09 16:33:18.776715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.060 [2024-11-09 16:33:18.776722] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:59.060 [2024-11-09 16:33:18.776728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:59.060 [2024-11-09 16:33:18.776733] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.060 [2024-11-09 16:33:18.776765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.060 [2024-11-09 16:33:18.776772] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:59.060 [2024-11-09 16:33:18.776778] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:59.060 [2024-11-09 16:33:18.776783] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.060 [2024-11-09 16:33:18.776800] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:59.060 [2024-11-09 16:33:18.779567] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.060 [2024-11-09 16:33:18.779590] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:59.060 [2024-11-09 16:33:18.779598] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.774 ms 00:23:59.060 [2024-11-09 16:33:18.779603] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.060 [2024-11-09 16:33:18.779633] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.060 [2024-11-09 16:33:18.779640] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:59.060 [2024-11-09 16:33:18.779645] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:59.060 [2024-11-09 16:33:18.779651] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.060 [2024-11-09 16:33:18.779664] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:59.060 [2024-11-09 16:33:18.779678] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:23:59.060 [2024-11-09 16:33:18.779704] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:59.060 [2024-11-09 16:33:18.779716] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:23:59.060 [2024-11-09 16:33:18.779773] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:23:59.060 [2024-11-09 16:33:18.779781] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:59.060 [2024-11-09 16:33:18.779788] 
upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:23:59.060 [2024-11-09 16:33:18.779796] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:59.060 [2024-11-09 16:33:18.779802] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:59.060 [2024-11-09 16:33:18.779808] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:59.060 [2024-11-09 16:33:18.779814] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:59.060 [2024-11-09 16:33:18.779820] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:23:59.060 [2024-11-09 16:33:18.779825] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:23:59.060 [2024-11-09 16:33:18.779833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.060 [2024-11-09 16:33:18.779838] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:59.060 [2024-11-09 16:33:18.779844] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.170 ms 00:23:59.060 [2024-11-09 16:33:18.779849] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.060 [2024-11-09 16:33:18.779894] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.060 [2024-11-09 16:33:18.779900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:59.060 [2024-11-09 16:33:18.779905] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:59.060 [2024-11-09 16:33:18.779910] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.060 [2024-11-09 16:33:18.779964] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:59.060 [2024-11-09 16:33:18.779970] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:59.060 [2024-11-09 16:33:18.779977] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:59.060 [2024-11-09 16:33:18.779984] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.060 [2024-11-09 16:33:18.779989] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:59.060 [2024-11-09 16:33:18.779994] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:59.060 [2024-11-09 16:33:18.779998] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:59.060 [2024-11-09 16:33:18.780004] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:59.060 [2024-11-09 16:33:18.780009] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:59.060 [2024-11-09 16:33:18.780013] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:59.060 [2024-11-09 16:33:18.780018] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:59.060 [2024-11-09 16:33:18.780023] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:59.060 [2024-11-09 16:33:18.780031] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:59.060 [2024-11-09 16:33:18.780038] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:59.060 [2024-11-09 16:33:18.780044] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:23:59.060 [2024-11-09 16:33:18.780048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 
MiB 00:23:59.060 [2024-11-09 16:33:18.780053] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:59.060 [2024-11-09 16:33:18.780058] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:23:59.060 [2024-11-09 16:33:18.780062] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.060 [2024-11-09 16:33:18.780067] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:23:59.060 [2024-11-09 16:33:18.780072] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:23:59.060 [2024-11-09 16:33:18.780076] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:23:59.060 [2024-11-09 16:33:18.780081] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:59.060 [2024-11-09 16:33:18.780086] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:59.060 [2024-11-09 16:33:18.780091] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:59.060 [2024-11-09 16:33:18.780096] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:59.060 [2024-11-09 16:33:18.780100] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:23:59.060 [2024-11-09 16:33:18.780105] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:59.060 [2024-11-09 16:33:18.780109] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:59.060 [2024-11-09 16:33:18.780114] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:59.060 [2024-11-09 16:33:18.780119] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:59.060 [2024-11-09 16:33:18.780123] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:59.060 [2024-11-09 16:33:18.780128] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:23:59.060 [2024-11-09 16:33:18.780133] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:23:59.060 [2024-11-09 16:33:18.780138] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:59.060 [2024-11-09 16:33:18.780142] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:59.060 [2024-11-09 16:33:18.780147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:59.060 [2024-11-09 16:33:18.780152] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:59.060 [2024-11-09 16:33:18.780157] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:23:59.060 [2024-11-09 16:33:18.780161] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:59.060 [2024-11-09 16:33:18.780166] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:59.060 [2024-11-09 16:33:18.780171] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:59.060 [2024-11-09 16:33:18.780177] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:59.060 [2024-11-09 16:33:18.780182] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:59.060 [2024-11-09 16:33:18.780188] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:59.060 [2024-11-09 16:33:18.780193] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:59.061 [2024-11-09 16:33:18.780198] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:59.061 [2024-11-09 16:33:18.780204] ftl_layout.c: 115:dump_region: 
*NOTICE*: [FTL][ftl0] Region data_btm 00:23:59.061 [2024-11-09 16:33:18.780208] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:59.061 [2024-11-09 16:33:18.780213] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:59.061 [2024-11-09 16:33:18.780219] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:59.061 [2024-11-09 16:33:18.780236] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:59.061 [2024-11-09 16:33:18.780242] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:59.061 [2024-11-09 16:33:18.780248] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:23:59.061 [2024-11-09 16:33:18.780253] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:23:59.061 [2024-11-09 16:33:18.780258] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:23:59.061 [2024-11-09 16:33:18.780264] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:23:59.061 [2024-11-09 16:33:18.780269] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:23:59.061 [2024-11-09 16:33:18.780275] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:23:59.061 [2024-11-09 16:33:18.780280] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:23:59.061 [2024-11-09 16:33:18.780285] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:23:59.061 [2024-11-09 16:33:18.780290] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:23:59.061 [2024-11-09 16:33:18.780296] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:23:59.061 [2024-11-09 16:33:18.780301] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:23:59.061 [2024-11-09 16:33:18.780307] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:23:59.061 [2024-11-09 16:33:18.780312] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:59.061 [2024-11-09 16:33:18.780319] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:59.061 [2024-11-09 16:33:18.780327] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:59.061 [2024-11-09 16:33:18.780332] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:59.061 
[2024-11-09 16:33:18.780345] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:59.061 [2024-11-09 16:33:18.780350] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:59.061 [2024-11-09 16:33:18.780356] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.061 [2024-11-09 16:33:18.780362] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:59.061 [2024-11-09 16:33:18.780367] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:23:59.061 [2024-11-09 16:33:18.780372] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.061 [2024-11-09 16:33:18.792216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.061 [2024-11-09 16:33:18.792254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:59.061 [2024-11-09 16:33:18.792262] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.818 ms 00:23:59.061 [2024-11-09 16:33:18.792268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.061 [2024-11-09 16:33:18.792331] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.061 [2024-11-09 16:33:18.792338] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:59.061 [2024-11-09 16:33:18.792343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:23:59.061 [2024-11-09 16:33:18.792350] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.320 [2024-11-09 16:33:18.850914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.320 [2024-11-09 16:33:18.850948] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:59.320 [2024-11-09 16:33:18.850960] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.529 ms 00:23:59.320 [2024-11-09 16:33:18.850966] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.320 [2024-11-09 16:33:18.850995] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.320 [2024-11-09 16:33:18.851003] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:59.320 [2024-11-09 16:33:18.851011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:23:59.320 [2024-11-09 16:33:18.851020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.320 [2024-11-09 16:33:18.851350] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.320 [2024-11-09 16:33:18.851363] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:59.320 [2024-11-09 16:33:18.851370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:23:59.320 [2024-11-09 16:33:18.851376] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.320 [2024-11-09 16:33:18.851466] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.320 [2024-11-09 16:33:18.851473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:59.320 [2024-11-09 16:33:18.851478] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:23:59.320 [2024-11-09 16:33:18.851484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.320 [2024-11-09 16:33:18.862480] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.321 [2024-11-09 16:33:18.862505] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:59.321 [2024-11-09 16:33:18.862513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.980 ms 00:23:59.321 [2024-11-09 16:33:18.862518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.321 [2024-11-09 16:33:18.872188] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:59.321 [2024-11-09 16:33:18.872217] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:59.321 [2024-11-09 16:33:18.872236] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.321 [2024-11-09 16:33:18.872242] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:59.321 [2024-11-09 16:33:18.872248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.644 ms 00:23:59.321 [2024-11-09 16:33:18.872254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.321 [2024-11-09 16:33:18.890539] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.321 [2024-11-09 16:33:18.890569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:59.321 [2024-11-09 16:33:18.890581] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.253 ms 00:23:59.321 [2024-11-09 16:33:18.890587] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.321 [2024-11-09 16:33:18.899715] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.321 [2024-11-09 16:33:18.899742] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:59.321 [2024-11-09 16:33:18.899749] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.097 ms 00:23:59.321 [2024-11-09 16:33:18.899761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.321 [2024-11-09 16:33:18.908216] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.321 [2024-11-09 16:33:18.908246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:59.321 [2024-11-09 16:33:18.908254] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.428 ms 00:23:59.321 [2024-11-09 16:33:18.908260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.321 [2024-11-09 16:33:18.908534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.321 [2024-11-09 16:33:18.908543] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:59.321 [2024-11-09 16:33:18.908549] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:23:59.321 [2024-11-09 16:33:18.908555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.321 [2024-11-09 16:33:18.954346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.321 [2024-11-09 16:33:18.954385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:59.321 [2024-11-09 16:33:18.954396] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.776 ms 00:23:59.321 [2024-11-09 16:33:18.954402] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.321 [2024-11-09 16:33:18.962710] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum 
resident size is: 9 (of 10) MiB 00:23:59.321 [2024-11-09 16:33:18.964776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.321 [2024-11-09 16:33:18.964801] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:59.321 [2024-11-09 16:33:18.964811] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.331 ms 00:23:59.321 [2024-11-09 16:33:18.964817] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.321 [2024-11-09 16:33:18.964880] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.321 [2024-11-09 16:33:18.964888] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:59.321 [2024-11-09 16:33:18.964895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:59.321 [2024-11-09 16:33:18.964900] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.321 [2024-11-09 16:33:18.964951] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.321 [2024-11-09 16:33:18.964958] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:59.321 [2024-11-09 16:33:18.964964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:59.321 [2024-11-09 16:33:18.964970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.321 [2024-11-09 16:33:18.965883] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.321 [2024-11-09 16:33:18.965909] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:23:59.321 [2024-11-09 16:33:18.965916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.900 ms 00:23:59.321 [2024-11-09 16:33:18.965925] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.321 [2024-11-09 16:33:18.965950] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.321 [2024-11-09 16:33:18.965957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:59.321 [2024-11-09 16:33:18.965964] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:59.321 [2024-11-09 16:33:18.965970] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.321 [2024-11-09 16:33:18.965995] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:59.321 [2024-11-09 16:33:18.966003] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.321 [2024-11-09 16:33:18.966008] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:59.321 [2024-11-09 16:33:18.966015] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:59.321 [2024-11-09 16:33:18.966020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.321 [2024-11-09 16:33:18.984878] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.321 [2024-11-09 16:33:18.984911] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:59.321 [2024-11-09 16:33:18.984920] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.845 ms 00:23:59.321 [2024-11-09 16:33:18.984926] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.321 [2024-11-09 16:33:18.984981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:59.321 [2024-11-09 16:33:18.984988] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 
00:23:59.321 [2024-11-09 16:33:18.984994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:59.321 [2024-11-09 16:33:18.985005] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:59.321 [2024-11-09 16:33:18.985806] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 224.622 ms, result 0 00:24:00.260  [2024-11-09T16:33:21.415Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-09T16:33:22.348Z] Copying: 33/1024 [MB] (12 MBps) [2024-11-09T16:33:23.286Z] Copying: 53/1024 [MB] (20 MBps) [2024-11-09T16:33:24.218Z] Copying: 78/1024 [MB] (24 MBps) [2024-11-09T16:33:25.151Z] Copying: 101/1024 [MB] (22 MBps) [2024-11-09T16:33:26.084Z] Copying: 136/1024 [MB] (35 MBps) [2024-11-09T16:33:27.019Z] Copying: 158/1024 [MB] (21 MBps) [2024-11-09T16:33:28.405Z] Copying: 190/1024 [MB] (32 MBps) [2024-11-09T16:33:29.338Z] Copying: 203/1024 [MB] (12 MBps) [2024-11-09T16:33:30.270Z] Copying: 222/1024 [MB] (18 MBps) [2024-11-09T16:33:31.238Z] Copying: 244/1024 [MB] (21 MBps) [2024-11-09T16:33:32.178Z] Copying: 265/1024 [MB] (21 MBps) [2024-11-09T16:33:33.117Z] Copying: 296/1024 [MB] (30 MBps) [2024-11-09T16:33:34.054Z] Copying: 324/1024 [MB] (27 MBps) [2024-11-09T16:33:35.427Z] Copying: 340/1024 [MB] (16 MBps) [2024-11-09T16:33:36.358Z] Copying: 375/1024 [MB] (34 MBps) [2024-11-09T16:33:37.291Z] Copying: 396/1024 [MB] (21 MBps) [2024-11-09T16:33:38.225Z] Copying: 431/1024 [MB] (35 MBps) [2024-11-09T16:33:39.167Z] Copying: 468/1024 [MB] (36 MBps) [2024-11-09T16:33:40.108Z] Copying: 483/1024 [MB] (15 MBps) [2024-11-09T16:33:41.040Z] Copying: 493/1024 [MB] (10 MBps) [2024-11-09T16:33:42.410Z] Copying: 519/1024 [MB] (25 MBps) [2024-11-09T16:33:43.342Z] Copying: 542/1024 [MB] (23 MBps) [2024-11-09T16:33:44.276Z] Copying: 581/1024 [MB] (38 MBps) [2024-11-09T16:33:45.249Z] Copying: 619/1024 [MB] (38 MBps) [2024-11-09T16:33:46.187Z] Copying: 658/1024 [MB] (38 MBps) [2024-11-09T16:33:47.127Z] Copying: 672/1024 [MB] (14 MBps) [2024-11-09T16:33:48.072Z] Copying: 685/1024 [MB] (13 MBps) [2024-11-09T16:33:49.015Z] Copying: 697/1024 [MB] (12 MBps) [2024-11-09T16:33:50.395Z] Copying: 710/1024 [MB] (13 MBps) [2024-11-09T16:33:51.335Z] Copying: 727/1024 [MB] (16 MBps) [2024-11-09T16:33:52.279Z] Copying: 740/1024 [MB] (13 MBps) [2024-11-09T16:33:53.221Z] Copying: 754/1024 [MB] (13 MBps) [2024-11-09T16:33:54.164Z] Copying: 770/1024 [MB] (15 MBps) [2024-11-09T16:33:55.109Z] Copying: 783/1024 [MB] (13 MBps) [2024-11-09T16:33:56.054Z] Copying: 797/1024 [MB] (13 MBps) [2024-11-09T16:33:57.440Z] Copying: 812/1024 [MB] (15 MBps) [2024-11-09T16:33:58.012Z] Copying: 825/1024 [MB] (12 MBps) [2024-11-09T16:33:59.399Z] Copying: 841/1024 [MB] (16 MBps) [2024-11-09T16:34:00.032Z] Copying: 855/1024 [MB] (13 MBps) [2024-11-09T16:34:01.414Z] Copying: 868/1024 [MB] (12 MBps) [2024-11-09T16:34:02.358Z] Copying: 891/1024 [MB] (23 MBps) [2024-11-09T16:34:03.295Z] Copying: 915/1024 [MB] (23 MBps) [2024-11-09T16:34:04.235Z] Copying: 939/1024 [MB] (23 MBps) [2024-11-09T16:34:05.176Z] Copying: 970/1024 [MB] (31 MBps) [2024-11-09T16:34:06.119Z] Copying: 991/1024 [MB] (21 MBps) [2024-11-09T16:34:07.061Z] Copying: 1016/1024 [MB] (25 MBps) [2024-11-09T16:34:07.322Z] Copying: 1048236/1048576 [kB] (6892 kBps) [2024-11-09T16:34:07.322Z] Copying: 1024/1024 [MB] (average 21 MBps)[2024-11-09 16:34:07.311028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.552 [2024-11-09 16:34:07.311385] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinit core IO channel 00:24:47.552 [2024-11-09 16:34:07.311420] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:47.552 [2024-11-09 16:34:07.311431] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.552 [2024-11-09 16:34:07.315911] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:47.552 [2024-11-09 16:34:07.319987] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.552 [2024-11-09 16:34:07.320036] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:47.552 [2024-11-09 16:34:07.320058] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.013 ms 00:24:47.552 [2024-11-09 16:34:07.320067] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.814 [2024-11-09 16:34:07.332526] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.814 [2024-11-09 16:34:07.332575] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:47.814 [2024-11-09 16:34:07.332589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.152 ms 00:24:47.814 [2024-11-09 16:34:07.332597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.814 [2024-11-09 16:34:07.356833] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.814 [2024-11-09 16:34:07.356883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:47.814 [2024-11-09 16:34:07.356895] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.216 ms 00:24:47.814 [2024-11-09 16:34:07.356904] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.814 [2024-11-09 16:34:07.363081] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.814 [2024-11-09 16:34:07.363297] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:24:47.814 [2024-11-09 16:34:07.363319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.096 ms 00:24:47.814 [2024-11-09 16:34:07.363330] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.814 [2024-11-09 16:34:07.391648] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.814 [2024-11-09 16:34:07.391699] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:47.814 [2024-11-09 16:34:07.391713] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.256 ms 00:24:47.814 [2024-11-09 16:34:07.391723] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:47.814 [2024-11-09 16:34:07.409142] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:47.814 [2024-11-09 16:34:07.409188] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:47.814 [2024-11-09 16:34:07.409203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.371 ms 00:24:47.814 [2024-11-09 16:34:07.409213] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.076 [2024-11-09 16:34:07.668848] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.076 [2024-11-09 16:34:07.668920] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:48.076 [2024-11-09 16:34:07.668933] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 259.566 ms 00:24:48.076 [2024-11-09 16:34:07.668944] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:24:48.076 [2024-11-09 16:34:07.695128] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.076 [2024-11-09 16:34:07.695355] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:24:48.076 [2024-11-09 16:34:07.695377] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.138 ms 00:24:48.076 [2024-11-09 16:34:07.695385] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.076 [2024-11-09 16:34:07.720791] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.076 [2024-11-09 16:34:07.720836] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:24:48.076 [2024-11-09 16:34:07.720849] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.367 ms 00:24:48.076 [2024-11-09 16:34:07.720857] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.076 [2024-11-09 16:34:07.745690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.076 [2024-11-09 16:34:07.745735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:48.076 [2024-11-09 16:34:07.745747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.788 ms 00:24:48.076 [2024-11-09 16:34:07.745755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.076 [2024-11-09 16:34:07.770799] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.076 [2024-11-09 16:34:07.770844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:48.076 [2024-11-09 16:34:07.770856] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.954 ms 00:24:48.076 [2024-11-09 16:34:07.770863] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.076 [2024-11-09 16:34:07.770909] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:48.076 [2024-11-09 16:34:07.770926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 95488 / 261120 wr_cnt: 1 state: open 00:24:48.076 [2024-11-09 16:34:07.770939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:48.076 [2024-11-09 16:34:07.770948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:48.076 [2024-11-09 16:34:07.770957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:48.076 [2024-11-09 16:34:07.770965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:48.076 [2024-11-09 16:34:07.770974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:48.076 [2024-11-09 16:34:07.770982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:48.076 [2024-11-09 16:34:07.770991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:48.076 [2024-11-09 16:34:07.771000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:48.076 [2024-11-09 16:34:07.771009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:48.076 [2024-11-09 16:34:07.771017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 
16:34:07.771025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 
00:24:48.077 [2024-11-09 16:34:07.771256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 
wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:48.077 [2024-11-09 16:34:07.771823] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:48.078 [2024-11-09 16:34:07.771834] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 902f49f9-b410-4954-8a62-8ab8809a921f 00:24:48.078 [2024-11-09 16:34:07.771843] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 95488 00:24:48.078 [2024-11-09 16:34:07.771851] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 96448 00:24:48.078 [2024-11-09 16:34:07.771860] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 95488 00:24:48.078 [2024-11-09 16:34:07.771879] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0101 00:24:48.078 [2024-11-09 16:34:07.771886] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:48.078 [2024-11-09 16:34:07.771895] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:48.078 [2024-11-09 16:34:07.771904] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:48.078 [2024-11-09 16:34:07.771911] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:48.078 [2024-11-09 16:34:07.771917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:48.078 [2024-11-09 16:34:07.771924] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.078 [2024-11-09 16:34:07.771934] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 
00:24:48.078 [2024-11-09 16:34:07.771942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.017 ms 00:24:48.078 [2024-11-09 16:34:07.771950] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.078 [2024-11-09 16:34:07.786354] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.078 [2024-11-09 16:34:07.786399] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:48.078 [2024-11-09 16:34:07.786410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.366 ms 00:24:48.078 [2024-11-09 16:34:07.786418] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.078 [2024-11-09 16:34:07.786663] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.078 [2024-11-09 16:34:07.786674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:48.078 [2024-11-09 16:34:07.786690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:24:48.078 [2024-11-09 16:34:07.786697] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.078 [2024-11-09 16:34:07.828666] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.078 [2024-11-09 16:34:07.828714] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:48.078 [2024-11-09 16:34:07.828726] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.078 [2024-11-09 16:34:07.828736] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.078 [2024-11-09 16:34:07.828800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.078 [2024-11-09 16:34:07.828810] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:48.078 [2024-11-09 16:34:07.828826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.078 [2024-11-09 16:34:07.828833] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.078 [2024-11-09 16:34:07.828914] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.078 [2024-11-09 16:34:07.828926] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:48.078 [2024-11-09 16:34:07.828935] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.078 [2024-11-09 16:34:07.828943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.078 [2024-11-09 16:34:07.828976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.078 [2024-11-09 16:34:07.828986] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:48.078 [2024-11-09 16:34:07.828994] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.078 [2024-11-09 16:34:07.829006] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.339 [2024-11-09 16:34:07.914622] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.339 [2024-11-09 16:34:07.914680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:48.339 [2024-11-09 16:34:07.914695] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.339 [2024-11-09 16:34:07.914704] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.339 [2024-11-09 16:34:07.948959] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.339 [2024-11-09 16:34:07.949288] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:48.339 [2024-11-09 16:34:07.949316] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.339 [2024-11-09 16:34:07.949326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.339 [2024-11-09 16:34:07.949412] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.339 [2024-11-09 16:34:07.949423] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:48.339 [2024-11-09 16:34:07.949433] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.339 [2024-11-09 16:34:07.949442] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.339 [2024-11-09 16:34:07.949489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.339 [2024-11-09 16:34:07.949500] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:48.339 [2024-11-09 16:34:07.949509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.339 [2024-11-09 16:34:07.949518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.339 [2024-11-09 16:34:07.949639] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.339 [2024-11-09 16:34:07.949652] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:48.339 [2024-11-09 16:34:07.949662] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.339 [2024-11-09 16:34:07.949670] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.339 [2024-11-09 16:34:07.949709] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.339 [2024-11-09 16:34:07.949719] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:48.339 [2024-11-09 16:34:07.949728] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.339 [2024-11-09 16:34:07.949737] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.339 [2024-11-09 16:34:07.949794] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.339 [2024-11-09 16:34:07.949804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:48.339 [2024-11-09 16:34:07.949814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.339 [2024-11-09 16:34:07.949822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.339 [2024-11-09 16:34:07.949882] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:48.339 [2024-11-09 16:34:07.949894] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:48.339 [2024-11-09 16:34:07.949903] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:48.339 [2024-11-09 16:34:07.949912] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.339 [2024-11-09 16:34:07.950076] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 640.463 ms, result 0 00:24:49.727 00:24:49.727 00:24:49.993 16:34:09 -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:51.907 16:34:11 -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:51.907 [2024-11-09 16:34:11.672113] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:24:51.907 [2024-11-09 16:34:11.672199] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77600 ] 00:24:52.167 [2024-11-09 16:34:11.817035] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.428 [2024-11-09 16:34:12.035188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:52.689 [2024-11-09 16:34:12.332340] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:52.689 [2024-11-09 16:34:12.332622] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:52.951 [2024-11-09 16:34:12.487852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.951 [2024-11-09 16:34:12.487897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:52.951 [2024-11-09 16:34:12.487911] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:52.951 [2024-11-09 16:34:12.487921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.951 [2024-11-09 16:34:12.487966] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.951 [2024-11-09 16:34:12.487976] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:52.951 [2024-11-09 16:34:12.487983] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:24:52.951 [2024-11-09 16:34:12.487991] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.951 [2024-11-09 16:34:12.488007] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:52.951 [2024-11-09 16:34:12.488999] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:52.951 [2024-11-09 16:34:12.489038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.951 [2024-11-09 16:34:12.489047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:52.951 [2024-11-09 16:34:12.489057] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.035 ms 00:24:52.951 [2024-11-09 16:34:12.489064] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.952 [2024-11-09 16:34:12.490201] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:52.952 [2024-11-09 16:34:12.503059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.952 [2024-11-09 16:34:12.503094] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:52.952 [2024-11-09 16:34:12.503106] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.859 ms 00:24:52.952 [2024-11-09 16:34:12.503115] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.952 [2024-11-09 16:34:12.503171] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.952 [2024-11-09 16:34:12.503180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:52.952 [2024-11-09 16:34:12.503188] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:24:52.952 [2024-11-09 16:34:12.503195] 
mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.952 [2024-11-09 16:34:12.508423] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.952 [2024-11-09 16:34:12.508453] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:52.952 [2024-11-09 16:34:12.508462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.161 ms 00:24:52.952 [2024-11-09 16:34:12.508470] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.952 [2024-11-09 16:34:12.508554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.952 [2024-11-09 16:34:12.508564] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:52.952 [2024-11-09 16:34:12.508572] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:24:52.952 [2024-11-09 16:34:12.508579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.952 [2024-11-09 16:34:12.508621] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.952 [2024-11-09 16:34:12.508630] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:52.952 [2024-11-09 16:34:12.508638] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:52.952 [2024-11-09 16:34:12.508645] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.952 [2024-11-09 16:34:12.508672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:52.952 [2024-11-09 16:34:12.512214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.952 [2024-11-09 16:34:12.512250] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:52.952 [2024-11-09 16:34:12.512260] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.554 ms 00:24:52.952 [2024-11-09 16:34:12.512267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.952 [2024-11-09 16:34:12.512297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.952 [2024-11-09 16:34:12.512305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:52.952 [2024-11-09 16:34:12.512313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:52.952 [2024-11-09 16:34:12.512322] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.952 [2024-11-09 16:34:12.512342] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:52.952 [2024-11-09 16:34:12.512359] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:24:52.952 [2024-11-09 16:34:12.512391] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:52.952 [2024-11-09 16:34:12.512405] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:24:52.952 [2024-11-09 16:34:12.512478] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:24:52.952 [2024-11-09 16:34:12.512525] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:52.952 [2024-11-09 16:34:12.512537] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:24:52.952 [2024-11-09 
16:34:12.512547] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:52.952 [2024-11-09 16:34:12.512556] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:52.952 [2024-11-09 16:34:12.512563] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:52.952 [2024-11-09 16:34:12.512570] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:52.952 [2024-11-09 16:34:12.512577] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:24:52.952 [2024-11-09 16:34:12.512584] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:24:52.952 [2024-11-09 16:34:12.512592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.952 [2024-11-09 16:34:12.512599] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:52.952 [2024-11-09 16:34:12.512606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:24:52.952 [2024-11-09 16:34:12.512613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.952 [2024-11-09 16:34:12.512674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.952 [2024-11-09 16:34:12.512682] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:52.952 [2024-11-09 16:34:12.512689] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:24:52.952 [2024-11-09 16:34:12.512696] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.952 [2024-11-09 16:34:12.512775] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:52.952 [2024-11-09 16:34:12.512785] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:52.952 [2024-11-09 16:34:12.512793] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:52.952 [2024-11-09 16:34:12.512800] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.952 [2024-11-09 16:34:12.512808] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:52.952 [2024-11-09 16:34:12.512815] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:52.952 [2024-11-09 16:34:12.512821] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:52.952 [2024-11-09 16:34:12.512827] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:52.952 [2024-11-09 16:34:12.512834] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:52.952 [2024-11-09 16:34:12.512841] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:52.952 [2024-11-09 16:34:12.512847] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:52.952 [2024-11-09 16:34:12.512853] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:52.952 [2024-11-09 16:34:12.512860] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:52.952 [2024-11-09 16:34:12.512867] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:52.952 [2024-11-09 16:34:12.512874] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:24:52.952 [2024-11-09 16:34:12.512880] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.952 [2024-11-09 16:34:12.512892] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:52.952 
[2024-11-09 16:34:12.512898] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:24:52.952 [2024-11-09 16:34:12.512905] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.952 [2024-11-09 16:34:12.512911] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:24:52.952 [2024-11-09 16:34:12.512918] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:24:52.952 [2024-11-09 16:34:12.512924] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:24:52.952 [2024-11-09 16:34:12.512931] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:52.952 [2024-11-09 16:34:12.512937] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:52.952 [2024-11-09 16:34:12.512943] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:52.952 [2024-11-09 16:34:12.512949] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:52.952 [2024-11-09 16:34:12.512955] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:24:52.952 [2024-11-09 16:34:12.512973] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:52.952 [2024-11-09 16:34:12.512980] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:52.952 [2024-11-09 16:34:12.512986] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:52.952 [2024-11-09 16:34:12.512992] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:52.952 [2024-11-09 16:34:12.512999] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:52.952 [2024-11-09 16:34:12.513005] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:24:52.952 [2024-11-09 16:34:12.513011] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:24:52.952 [2024-11-09 16:34:12.513017] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:52.952 [2024-11-09 16:34:12.513024] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:52.952 [2024-11-09 16:34:12.513030] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:52.952 [2024-11-09 16:34:12.513036] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:52.952 [2024-11-09 16:34:12.513042] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:24:52.952 [2024-11-09 16:34:12.513049] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:52.952 [2024-11-09 16:34:12.513055] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:52.952 [2024-11-09 16:34:12.513064] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:52.952 [2024-11-09 16:34:12.513071] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:52.952 [2024-11-09 16:34:12.513077] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:52.952 [2024-11-09 16:34:12.513084] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:52.952 [2024-11-09 16:34:12.513092] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:52.952 [2024-11-09 16:34:12.513099] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:52.952 [2024-11-09 16:34:12.513105] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:52.952 [2024-11-09 16:34:12.513112] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 0.25 MiB 00:24:52.952 [2024-11-09 16:34:12.513118] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:52.952 [2024-11-09 16:34:12.513126] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:52.952 [2024-11-09 16:34:12.513135] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:52.952 [2024-11-09 16:34:12.513143] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:52.953 [2024-11-09 16:34:12.513150] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:24:52.953 [2024-11-09 16:34:12.513158] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:24:52.953 [2024-11-09 16:34:12.513164] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:24:52.953 [2024-11-09 16:34:12.513171] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:24:52.953 [2024-11-09 16:34:12.513178] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:24:52.953 [2024-11-09 16:34:12.513185] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:24:52.953 [2024-11-09 16:34:12.513192] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:24:52.953 [2024-11-09 16:34:12.513198] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:24:52.953 [2024-11-09 16:34:12.513205] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:24:52.953 [2024-11-09 16:34:12.513212] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:24:52.953 [2024-11-09 16:34:12.513219] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:24:52.953 [2024-11-09 16:34:12.513247] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:24:52.953 [2024-11-09 16:34:12.513254] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:52.953 [2024-11-09 16:34:12.513262] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:52.953 [2024-11-09 16:34:12.513270] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:52.953 [2024-11-09 16:34:12.513277] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:52.953 [2024-11-09 16:34:12.513284] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 
blk_offs:0x1900040 blk_sz:0x360 00:24:52.953 [2024-11-09 16:34:12.513290] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:52.953 [2024-11-09 16:34:12.513298] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.953 [2024-11-09 16:34:12.513306] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:52.953 [2024-11-09 16:34:12.513313] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.566 ms 00:24:52.953 [2024-11-09 16:34:12.513320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.953 [2024-11-09 16:34:12.528580] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.953 [2024-11-09 16:34:12.528613] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:52.953 [2024-11-09 16:34:12.528623] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.212 ms 00:24:52.953 [2024-11-09 16:34:12.528634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.953 [2024-11-09 16:34:12.528716] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.953 [2024-11-09 16:34:12.528724] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:52.953 [2024-11-09 16:34:12.528731] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:52.953 [2024-11-09 16:34:12.528738] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.953 [2024-11-09 16:34:12.574674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.953 [2024-11-09 16:34:12.574720] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:52.953 [2024-11-09 16:34:12.574732] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.893 ms 00:24:52.953 [2024-11-09 16:34:12.574740] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.953 [2024-11-09 16:34:12.574784] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.953 [2024-11-09 16:34:12.574794] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:52.953 [2024-11-09 16:34:12.574802] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:52.953 [2024-11-09 16:34:12.574810] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.953 [2024-11-09 16:34:12.575264] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.953 [2024-11-09 16:34:12.575283] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:52.953 [2024-11-09 16:34:12.575293] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms 00:24:52.953 [2024-11-09 16:34:12.575305] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.953 [2024-11-09 16:34:12.575431] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.953 [2024-11-09 16:34:12.575441] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:52.953 [2024-11-09 16:34:12.575449] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:24:52.953 [2024-11-09 16:34:12.575456] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.953 [2024-11-09 16:34:12.590568] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.953 [2024-11-09 16:34:12.590732] mngt/ftl_mngt.c: 
407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:52.953 [2024-11-09 16:34:12.590750] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.091 ms 00:24:52.953 [2024-11-09 16:34:12.590758] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.953 [2024-11-09 16:34:12.604738] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:52.953 [2024-11-09 16:34:12.604785] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:52.953 [2024-11-09 16:34:12.604797] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.953 [2024-11-09 16:34:12.604806] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:52.953 [2024-11-09 16:34:12.604815] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.939 ms 00:24:52.953 [2024-11-09 16:34:12.604823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.953 [2024-11-09 16:34:12.630230] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.953 [2024-11-09 16:34:12.630281] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:52.953 [2024-11-09 16:34:12.630292] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.351 ms 00:24:52.953 [2024-11-09 16:34:12.630300] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.953 [2024-11-09 16:34:12.643351] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.953 [2024-11-09 16:34:12.643401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:52.953 [2024-11-09 16:34:12.643413] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.990 ms 00:24:52.953 [2024-11-09 16:34:12.643421] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.953 [2024-11-09 16:34:12.656278] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.953 [2024-11-09 16:34:12.656473] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:52.953 [2024-11-09 16:34:12.656507] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.801 ms 00:24:52.953 [2024-11-09 16:34:12.656514] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:52.953 [2024-11-09 16:34:12.656901] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:52.953 [2024-11-09 16:34:12.656915] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:52.953 [2024-11-09 16:34:12.656924] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:24:52.953 [2024-11-09 16:34:12.656931] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.215 [2024-11-09 16:34:12.725342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.215 [2024-11-09 16:34:12.725411] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:53.215 [2024-11-09 16:34:12.725427] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.391 ms 00:24:53.215 [2024-11-09 16:34:12.725436] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.215 [2024-11-09 16:34:12.737332] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:53.215 [2024-11-09 16:34:12.740524] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:53.215 [2024-11-09 16:34:12.740569] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:53.215 [2024-11-09 16:34:12.740582] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.023 ms 00:24:53.215 [2024-11-09 16:34:12.740597] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.215 [2024-11-09 16:34:12.740674] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.215 [2024-11-09 16:34:12.740685] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:53.215 [2024-11-09 16:34:12.740694] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:53.215 [2024-11-09 16:34:12.740702] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.215 [2024-11-09 16:34:12.742149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.215 [2024-11-09 16:34:12.742202] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:53.215 [2024-11-09 16:34:12.742214] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.407 ms 00:24:53.215 [2024-11-09 16:34:12.742240] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.215 [2024-11-09 16:34:12.743618] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.215 [2024-11-09 16:34:12.743800] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:24:53.215 [2024-11-09 16:34:12.743823] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.342 ms 00:24:53.215 [2024-11-09 16:34:12.743832] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.215 [2024-11-09 16:34:12.743874] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.215 [2024-11-09 16:34:12.743883] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:53.215 [2024-11-09 16:34:12.743900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:53.215 [2024-11-09 16:34:12.743908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.215 [2024-11-09 16:34:12.743947] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:53.215 [2024-11-09 16:34:12.743957] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.215 [2024-11-09 16:34:12.743969] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:53.215 [2024-11-09 16:34:12.743977] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:53.215 [2024-11-09 16:34:12.743985] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.215 [2024-11-09 16:34:12.770464] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.215 [2024-11-09 16:34:12.770651] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:53.215 [2024-11-09 16:34:12.770674] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.460 ms 00:24:53.215 [2024-11-09 16:34:12.770684] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.215 [2024-11-09 16:34:12.770768] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:53.215 [2024-11-09 16:34:12.770777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:53.215 [2024-11-09 16:34:12.770786] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 
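A quick cross-check of the superblock dump above: each "Region type:... blk_offs:... blk_sz:..." entry is expressed in FTL blocks, while the ftl_layout.c dump expresses the same regions in MiB. A minimal Python sketch of the conversion, assuming a 4 KiB FTL block size (an assumption, not stated in this log, but it reproduces the MiB figures the log prints):

    import re

    FTL_BLOCK_SIZE = 4096  # bytes per FTL block -- assumed, consistent with this log's MiB values

    # Matches entries like: Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
    REGION = re.compile(
        r"type:(0x[0-9a-fA-F]+) ver:(\d+) blk_offs:(0x[0-9a-fA-F]+) blk_sz:(0x[0-9a-fA-F]+)"
    )

    def mib(hex_blocks: str) -> float:
        """Convert a hex block count or block offset to MiB."""
        return int(hex_blocks, 16) * FTL_BLOCK_SIZE / (1 << 20)

    for entry in (
        "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000",      # l2p
        "Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000",  # data_nvc
    ):
        rtype, _ver, offs, size = REGION.search(entry).groups()
        print(f"type {rtype}: offset {mib(offs):.2f} MiB, size {mib(size):.2f} MiB")
    # type 0x2: offset 0.12 MiB, size 80.00 MiB
    # type 0x8: offset 97.88 MiB, size 4096.00 MiB

These match the ftl_layout.c dump elsewhere in this log (Region l2p: offset 0.12 MiB, blocks 80.00 MiB; Region data_nvc: offset 97.88 MiB, blocks 4096.00 MiB).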
00:24:53.215 [2024-11-09 16:34:12.770794] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:53.215 [2024-11-09 16:34:12.778030] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 288.941 ms, result 0 00:24:54.601  [2024-11-09T16:34:15.316Z] Copying: 1128/1048576 [kB] (1128 kBps) [... intermediate dd progress-meter redraws (16:34:16Z through 16:34:57Z) omitted ...] [2024-11-09T16:34:57.823Z] Copying: 1024/1024 [MB] (average 23 MBps) [2024-11-09 16:34:57.603704] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.053 [2024-11-09 16:34:57.603804] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:38.053 [2024-11-09 16:34:57.603826] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:38.053 [2024-11-09 16:34:57.603840] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.053 [2024-11-09 16:34:57.603875] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb:
*NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:38.053 [2024-11-09 16:34:57.608251] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.053 [2024-11-09 16:34:57.608303] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:38.053 [2024-11-09 16:34:57.608319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.353 ms 00:25:38.053 [2024-11-09 16:34:57.608331] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.053 [2024-11-09 16:34:57.608705] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.053 [2024-11-09 16:34:57.608730] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:38.053 [2024-11-09 16:34:57.608744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.330 ms 00:25:38.053 [2024-11-09 16:34:57.608755] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.053 [2024-11-09 16:34:57.625281] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.053 [2024-11-09 16:34:57.625333] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:38.053 [2024-11-09 16:34:57.625346] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.502 ms 00:25:38.053 [2024-11-09 16:34:57.625354] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.053 [2024-11-09 16:34:57.631534] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.053 [2024-11-09 16:34:57.631581] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:25:38.053 [2024-11-09 16:34:57.631592] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.139 ms 00:25:38.053 [2024-11-09 16:34:57.631600] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.053 [2024-11-09 16:34:57.658297] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.053 [2024-11-09 16:34:57.658343] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:38.053 [2024-11-09 16:34:57.658356] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.643 ms 00:25:38.053 [2024-11-09 16:34:57.658364] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.053 [2024-11-09 16:34:57.673972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.053 [2024-11-09 16:34:57.674019] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:38.053 [2024-11-09 16:34:57.674031] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.563 ms 00:25:38.053 [2024-11-09 16:34:57.674039] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.053 [2024-11-09 16:34:57.682513] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.053 [2024-11-09 16:34:57.682556] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:38.053 [2024-11-09 16:34:57.682575] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.423 ms 00:25:38.053 [2024-11-09 16:34:57.682583] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.053 [2024-11-09 16:34:57.708635] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.053 [2024-11-09 16:34:57.708680] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:25:38.053 [2024-11-09 16:34:57.708692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 
26.037 ms 00:25:38.053 [2024-11-09 16:34:57.708699] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.053 [2024-11-09 16:34:57.734478] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.053 [2024-11-09 16:34:57.734520] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:25:38.053 [2024-11-09 16:34:57.734531] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.734 ms 00:25:38.053 [2024-11-09 16:34:57.734550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.053 [2024-11-09 16:34:57.759462] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.053 [2024-11-09 16:34:57.759663] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:38.053 [2024-11-09 16:34:57.759685] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.867 ms 00:25:38.053 [2024-11-09 16:34:57.759693] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.053 [2024-11-09 16:34:57.784566] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.053 [2024-11-09 16:34:57.784614] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:38.053 [2024-11-09 16:34:57.784627] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.686 ms 00:25:38.053 [2024-11-09 16:34:57.784634] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.053 [2024-11-09 16:34:57.784678] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:38.053 [2024-11-09 16:34:57.784694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:38.053 [2024-11-09 16:34:57.784705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3072 / 261120 wr_cnt: 1 state: open 00:25:38.053 [2024-11-09 16:34:57.784715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
14: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:38.053 [2024-11-09 16:34:57.784893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.784901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.784908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.784915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.784923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.784948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.784957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.784965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.784973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.784981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.784988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.784997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785020] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785217] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 
16:34:57.785435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:38.054 [2024-11-09 16:34:57.785539] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:38.054 [2024-11-09 16:34:57.785548] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 902f49f9-b410-4954-8a62-8ab8809a921f 00:25:38.054 [2024-11-09 16:34:57.785555] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264192 00:25:38.054 [2024-11-09 16:34:57.785570] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 170688 00:25:38.054 [2024-11-09 16:34:57.785577] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 168704 00:25:38.054 [2024-11-09 16:34:57.785586] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0118 00:25:38.054 [2024-11-09 16:34:57.785593] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:38.054 [2024-11-09 16:34:57.785602] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:38.054 [2024-11-09 16:34:57.785610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:38.054 [2024-11-09 16:34:57.785616] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:38.054 [2024-11-09 16:34:57.785629] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:38.054 [2024-11-09 16:34:57.785637] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.054 [2024-11-09 16:34:57.785645] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:38.054 [2024-11-09 16:34:57.785654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:25:38.054 [2024-11-09 16:34:57.785662] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.054 [2024-11-09 16:34:57.799497] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.054 [2024-11-09 
16:34:57.799539] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:38.055 [2024-11-09 16:34:57.799550] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.800 ms 00:25:38.055 [2024-11-09 16:34:57.799558] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.055 [2024-11-09 16:34:57.799788] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.055 [2024-11-09 16:34:57.799798] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:38.055 [2024-11-09 16:34:57.799806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:25:38.055 [2024-11-09 16:34:57.799820] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.316 [2024-11-09 16:34:57.838813] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.316 [2024-11-09 16:34:57.839001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:38.316 [2024-11-09 16:34:57.839022] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.316 [2024-11-09 16:34:57.839030] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.316 [2024-11-09 16:34:57.839092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.316 [2024-11-09 16:34:57.839101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:38.316 [2024-11-09 16:34:57.839110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.316 [2024-11-09 16:34:57.839124] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.316 [2024-11-09 16:34:57.839200] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.316 [2024-11-09 16:34:57.839211] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:38.316 [2024-11-09 16:34:57.839219] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.316 [2024-11-09 16:34:57.839251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.316 [2024-11-09 16:34:57.839267] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.316 [2024-11-09 16:34:57.839274] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:38.316 [2024-11-09 16:34:57.839282] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.316 [2024-11-09 16:34:57.839290] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.316 [2024-11-09 16:34:57.920775] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.316 [2024-11-09 16:34:57.920825] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:38.316 [2024-11-09 16:34:57.920837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.316 [2024-11-09 16:34:57.920844] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.316 [2024-11-09 16:34:57.952846] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.316 [2024-11-09 16:34:57.953045] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:38.316 [2024-11-09 16:34:57.953065] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.316 [2024-11-09 16:34:57.953075] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.316 [2024-11-09 16:34:57.953154] mngt/ftl_mngt.c: 
406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.316 [2024-11-09 16:34:57.953164] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:38.316 [2024-11-09 16:34:57.953173] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.316 [2024-11-09 16:34:57.953181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.316 [2024-11-09 16:34:57.953246] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.316 [2024-11-09 16:34:57.953258] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:38.316 [2024-11-09 16:34:57.953266] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.316 [2024-11-09 16:34:57.953274] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.316 [2024-11-09 16:34:57.953381] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.316 [2024-11-09 16:34:57.953395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:38.316 [2024-11-09 16:34:57.953404] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.316 [2024-11-09 16:34:57.953412] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.316 [2024-11-09 16:34:57.953447] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.316 [2024-11-09 16:34:57.953455] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:38.316 [2024-11-09 16:34:57.953463] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.316 [2024-11-09 16:34:57.953472] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.316 [2024-11-09 16:34:57.953510] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.316 [2024-11-09 16:34:57.953522] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:38.316 [2024-11-09 16:34:57.953530] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.316 [2024-11-09 16:34:57.953538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.316 [2024-11-09 16:34:57.953586] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:38.316 [2024-11-09 16:34:57.953597] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:38.316 [2024-11-09 16:34:57.953606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:38.316 [2024-11-09 16:34:57.953613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.316 [2024-11-09 16:34:57.953746] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 350.019 ms, result 0 00:25:39.259 00:25:39.259 00:25:39.259 16:34:58 -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:41.808 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:41.808 16:35:01 -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:41.808 [2024-11-09 16:35:01.128180] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
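The statistics block dumped by ftl_debug.c at shutdown (device UUID, valid LBAs, write counters, WAF) is self-consistent: the reported WAF is just the ratio of the two write counters. Recomputed with the values taken verbatim from the dump above:

    # Counters copied from the "Dump statistics" block above.
    total_writes = 170688   # "total writes": every write the FTL issued to media
    user_writes  = 168704   # "user writes": host-initiated writes only
    waf = total_writes / user_writes
    print(f"WAF: {waf:.4f}")   # -> WAF: 1.0118, as printed in the log

The ~1.2% overhead is the FTL's own traffic on top of user data (metadata persistence and relocation writes).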
00:25:41.808 [2024-11-09 16:35:01.128291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78102 ] 00:25:41.808 [2024-11-09 16:35:01.273412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:41.808 [2024-11-09 16:35:01.494110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.070 [2024-11-09 16:35:01.784909] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:42.070 [2024-11-09 16:35:01.785292] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:42.333 [2024-11-09 16:35:01.942062] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.333 [2024-11-09 16:35:01.942126] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:42.333 [2024-11-09 16:35:01.942141] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:42.333 [2024-11-09 16:35:01.942153] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.333 [2024-11-09 16:35:01.942208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.333 [2024-11-09 16:35:01.942219] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:42.333 [2024-11-09 16:35:01.942252] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:25:42.333 [2024-11-09 16:35:01.942260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.333 [2024-11-09 16:35:01.942281] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:42.333 [2024-11-09 16:35:01.943053] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:42.333 [2024-11-09 16:35:01.943092] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.333 [2024-11-09 16:35:01.943101] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:42.333 [2024-11-09 16:35:01.943110] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.815 ms 00:25:42.333 [2024-11-09 16:35:01.943118] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.333 [2024-11-09 16:35:01.945080] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:42.333 [2024-11-09 16:35:01.959776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.333 [2024-11-09 16:35:01.959839] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:42.333 [2024-11-09 16:35:01.959854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.699 ms 00:25:42.333 [2024-11-09 16:35:01.959862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.333 [2024-11-09 16:35:01.959941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.333 [2024-11-09 16:35:01.959951] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:42.333 [2024-11-09 16:35:01.959961] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:42.333 [2024-11-09 16:35:01.959969] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.333 [2024-11-09 16:35:01.968215] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.333 [2024-11-09 
16:35:01.968272] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:42.333 [2024-11-09 16:35:01.968283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.166 ms 00:25:42.333 [2024-11-09 16:35:01.968292] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.333 [2024-11-09 16:35:01.968406] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.333 [2024-11-09 16:35:01.968417] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:42.333 [2024-11-09 16:35:01.968426] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:25:42.333 [2024-11-09 16:35:01.968435] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.333 [2024-11-09 16:35:01.968483] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.333 [2024-11-09 16:35:01.968493] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:42.333 [2024-11-09 16:35:01.968502] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:42.333 [2024-11-09 16:35:01.968509] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.333 [2024-11-09 16:35:01.968541] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:42.333 [2024-11-09 16:35:01.972800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.333 [2024-11-09 16:35:01.972841] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:42.333 [2024-11-09 16:35:01.972853] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.273 ms 00:25:42.333 [2024-11-09 16:35:01.972860] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.333 [2024-11-09 16:35:01.972899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.333 [2024-11-09 16:35:01.972908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:42.333 [2024-11-09 16:35:01.972916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:42.333 [2024-11-09 16:35:01.972949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.333 [2024-11-09 16:35:01.973001] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:42.333 [2024-11-09 16:35:01.973025] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x138 bytes 00:25:42.333 [2024-11-09 16:35:01.973060] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:42.333 [2024-11-09 16:35:01.973077] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x140 bytes 00:25:42.333 [2024-11-09 16:35:01.973152] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x138 bytes 00:25:42.333 [2024-11-09 16:35:01.973161] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:42.333 [2024-11-09 16:35:01.973175] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x140 bytes 00:25:42.333 [2024-11-09 16:35:01.973186] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:42.333 [2024-11-09 16:35:01.973195] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: 
[FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:42.333 [2024-11-09 16:35:01.973203] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:42.333 [2024-11-09 16:35:01.973211] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:42.333 [2024-11-09 16:35:01.973219] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 1024 00:25:42.333 [2024-11-09 16:35:01.973251] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 4 00:25:42.333 [2024-11-09 16:35:01.973260] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.333 [2024-11-09 16:35:01.973268] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:42.333 [2024-11-09 16:35:01.973276] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:25:42.333 [2024-11-09 16:35:01.973283] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.333 [2024-11-09 16:35:01.973352] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.333 [2024-11-09 16:35:01.973361] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:42.333 [2024-11-09 16:35:01.973370] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:42.333 [2024-11-09 16:35:01.973377] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.333 [2024-11-09 16:35:01.973451] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:42.333 [2024-11-09 16:35:01.973461] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:42.333 [2024-11-09 16:35:01.973469] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:42.333 [2024-11-09 16:35:01.973478] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.333 [2024-11-09 16:35:01.973486] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:42.333 [2024-11-09 16:35:01.973493] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:42.333 [2024-11-09 16:35:01.973500] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:42.333 [2024-11-09 16:35:01.973509] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:42.333 [2024-11-09 16:35:01.973516] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:42.333 [2024-11-09 16:35:01.973523] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:42.333 [2024-11-09 16:35:01.973533] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:42.333 [2024-11-09 16:35:01.973540] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:42.333 [2024-11-09 16:35:01.973546] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:42.333 [2024-11-09 16:35:01.973553] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:42.333 [2024-11-09 16:35:01.973560] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.62 MiB 00:25:42.333 [2024-11-09 16:35:01.973570] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.333 [2024-11-09 16:35:01.973597] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:42.333 [2024-11-09 16:35:01.973604] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.75 MiB 00:25:42.334 [2024-11-09 16:35:01.973611] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.12 MiB 00:25:42.334 [2024-11-09 16:35:01.973618] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_nvc 00:25:42.334 [2024-11-09 16:35:01.973624] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.88 MiB 00:25:42.334 [2024-11-09 16:35:01.973631] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4096.00 MiB 00:25:42.334 [2024-11-09 16:35:01.973638] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:42.334 [2024-11-09 16:35:01.973645] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:42.334 [2024-11-09 16:35:01.973652] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:42.334 [2024-11-09 16:35:01.973659] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:42.334 [2024-11-09 16:35:01.973666] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 85.12 MiB 00:25:42.334 [2024-11-09 16:35:01.973673] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:42.334 [2024-11-09 16:35:01.973679] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:42.334 [2024-11-09 16:35:01.973686] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:42.334 [2024-11-09 16:35:01.973693] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:42.334 [2024-11-09 16:35:01.973701] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:42.334 [2024-11-09 16:35:01.973707] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 93.12 MiB 00:25:42.334 [2024-11-09 16:35:01.973714] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 4.00 MiB 00:25:42.334 [2024-11-09 16:35:01.973721] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:42.334 [2024-11-09 16:35:01.973727] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:42.334 [2024-11-09 16:35:01.973734] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:42.334 [2024-11-09 16:35:01.973740] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:42.334 [2024-11-09 16:35:01.973747] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.38 MiB 00:25:42.334 [2024-11-09 16:35:01.973754] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:42.334 [2024-11-09 16:35:01.973760] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:42.334 [2024-11-09 16:35:01.973772] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:42.334 [2024-11-09 16:35:01.973783] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:42.334 [2024-11-09 16:35:01.973790] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:42.334 [2024-11-09 16:35:01.973798] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:42.334 [2024-11-09 16:35:01.973805] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:42.334 [2024-11-09 16:35:01.973811] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:42.334 [2024-11-09 16:35:01.973818] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:42.334 [2024-11-09 16:35:01.973825] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:42.334 [2024-11-09 16:35:01.973832] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:42.334 [2024-11-09 16:35:01.973841] upgrade/ftl_sb_v5.c: 
407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:42.334 [2024-11-09 16:35:01.973851] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:42.334 [2024-11-09 16:35:01.973859] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:42.334 [2024-11-09 16:35:01.973866] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:1 blk_offs:0x5020 blk_sz:0x80 00:25:42.334 [2024-11-09 16:35:01.973874] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:1 blk_offs:0x50a0 blk_sz:0x80 00:25:42.334 [2024-11-09 16:35:01.973880] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:1 blk_offs:0x5120 blk_sz:0x400 00:25:42.334 [2024-11-09 16:35:01.973888] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:1 blk_offs:0x5520 blk_sz:0x400 00:25:42.334 [2024-11-09 16:35:01.973895] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:1 blk_offs:0x5920 blk_sz:0x400 00:25:42.334 [2024-11-09 16:35:01.973902] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:1 blk_offs:0x5d20 blk_sz:0x400 00:25:42.334 [2024-11-09 16:35:01.973909] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x6120 blk_sz:0x40 00:25:42.334 [2024-11-09 16:35:01.973916] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x6160 blk_sz:0x40 00:25:42.334 [2024-11-09 16:35:01.973923] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:1 blk_offs:0x61a0 blk_sz:0x20 00:25:42.334 [2024-11-09 16:35:01.973932] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:1 blk_offs:0x61c0 blk_sz:0x20 00:25:42.334 [2024-11-09 16:35:01.973939] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x8 ver:0 blk_offs:0x61e0 blk_sz:0x100000 00:25:42.334 [2024-11-09 16:35:01.973948] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x1061e0 blk_sz:0x3d120 00:25:42.334 [2024-11-09 16:35:01.973956] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:42.334 [2024-11-09 16:35:01.973964] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:42.334 [2024-11-09 16:35:01.973972] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:42.334 [2024-11-09 16:35:01.973980] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:42.334 [2024-11-09 16:35:01.973987] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:42.334 [2024-11-09 16:35:01.973994] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 
blk_sz:0x3fc60 00:25:42.334 [2024-11-09 16:35:01.974002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.334 [2024-11-09 16:35:01.974009] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:42.334 [2024-11-09 16:35:01.974017] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:25:42.334 [2024-11-09 16:35:01.974025] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.334 [2024-11-09 16:35:01.992620] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.334 [2024-11-09 16:35:01.992676] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:42.334 [2024-11-09 16:35:01.992690] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.551 ms 00:25:42.334 [2024-11-09 16:35:01.992706] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.334 [2024-11-09 16:35:01.992803] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.334 [2024-11-09 16:35:01.992812] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:42.334 [2024-11-09 16:35:01.992820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:42.334 [2024-11-09 16:35:01.992829] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.334 [2024-11-09 16:35:02.040034] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.334 [2024-11-09 16:35:02.040234] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:42.334 [2024-11-09 16:35:02.040257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.150 ms 00:25:42.334 [2024-11-09 16:35:02.040266] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.334 [2024-11-09 16:35:02.040319] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.334 [2024-11-09 16:35:02.040329] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:42.334 [2024-11-09 16:35:02.040338] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:42.334 [2024-11-09 16:35:02.040346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.334 [2024-11-09 16:35:02.040908] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.334 [2024-11-09 16:35:02.040949] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:42.334 [2024-11-09 16:35:02.040959] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.508 ms 00:25:42.334 [2024-11-09 16:35:02.040974] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.334 [2024-11-09 16:35:02.041104] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.334 [2024-11-09 16:35:02.041115] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:42.334 [2024-11-09 16:35:02.041124] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:25:42.334 [2024-11-09 16:35:02.041132] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.334 [2024-11-09 16:35:02.057879] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.334 [2024-11-09 16:35:02.057927] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:42.334 [2024-11-09 16:35:02.057938] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.721 ms 00:25:42.334 [2024-11-09 
16:35:02.057947] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.334 [2024-11-09 16:35:02.072412] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:42.334 [2024-11-09 16:35:02.072464] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:42.334 [2024-11-09 16:35:02.072477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.334 [2024-11-09 16:35:02.072486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:42.334 [2024-11-09 16:35:02.072496] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.417 ms 00:25:42.334 [2024-11-09 16:35:02.072503] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.334 [2024-11-09 16:35:02.098793] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.334 [2024-11-09 16:35:02.098844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:42.334 [2024-11-09 16:35:02.098857] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.236 ms 00:25:42.334 [2024-11-09 16:35:02.098865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.597 [2024-11-09 16:35:02.112169] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.597 [2024-11-09 16:35:02.112218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:42.597 [2024-11-09 16:35:02.112248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.250 ms 00:25:42.597 [2024-11-09 16:35:02.112256] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.597 [2024-11-09 16:35:02.125075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.597 [2024-11-09 16:35:02.125132] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:42.597 [2024-11-09 16:35:02.125144] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.772 ms 00:25:42.597 [2024-11-09 16:35:02.125151] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.597 [2024-11-09 16:35:02.125563] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.597 [2024-11-09 16:35:02.125580] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:42.597 [2024-11-09 16:35:02.125589] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:25:42.597 [2024-11-09 16:35:02.125598] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.597 [2024-11-09 16:35:02.192742] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.597 [2024-11-09 16:35:02.192803] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:42.597 [2024-11-09 16:35:02.192820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 67.124 ms 00:25:42.597 [2024-11-09 16:35:02.192828] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.597 [2024-11-09 16:35:02.204407] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:42.597 [2024-11-09 16:35:02.207489] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.597 [2024-11-09 16:35:02.207532] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:42.597 [2024-11-09 16:35:02.207544] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.600 ms 00:25:42.597 [2024-11-09 16:35:02.207559] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.597 [2024-11-09 16:35:02.207634] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.597 [2024-11-09 16:35:02.207644] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:42.597 [2024-11-09 16:35:02.207654] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:42.597 [2024-11-09 16:35:02.207663] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.597 [2024-11-09 16:35:02.208525] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.597 [2024-11-09 16:35:02.208574] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:42.597 [2024-11-09 16:35:02.208587] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.824 ms 00:25:42.597 [2024-11-09 16:35:02.208596] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.597 [2024-11-09 16:35:02.210028] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.597 [2024-11-09 16:35:02.210075] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Free P2L region bufs 00:25:42.597 [2024-11-09 16:35:02.210086] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.399 ms 00:25:42.597 [2024-11-09 16:35:02.210095] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.597 [2024-11-09 16:35:02.210132] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.597 [2024-11-09 16:35:02.210140] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:42.597 [2024-11-09 16:35:02.210154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:42.597 [2024-11-09 16:35:02.210162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.597 [2024-11-09 16:35:02.210199] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:42.597 [2024-11-09 16:35:02.210210] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.597 [2024-11-09 16:35:02.210243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:42.597 [2024-11-09 16:35:02.210253] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:42.597 [2024-11-09 16:35:02.210260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.597 [2024-11-09 16:35:02.236724] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.597 [2024-11-09 16:35:02.236774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:42.597 [2024-11-09 16:35:02.236787] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.444 ms 00:25:42.597 [2024-11-09 16:35:02.236795] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.597 [2024-11-09 16:35:02.236886] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:42.597 [2024-11-09 16:35:02.236897] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:42.597 [2024-11-09 16:35:02.236907] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:42.597 [2024-11-09 16:35:02.236915] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:42.597 [2024-11-09 16:35:02.238213] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] 
Management process finished, name 'FTL startup', duration = 295.665 ms, result 0 00:25:43.988  [2024-11-09T16:35:04.704Z] Copying: 21/1024 [MB] (21 MBps) [... ~70 intermediate copy-progress ticks at 10-25 MBps condensed ...] [2024-11-09T16:36:12.856Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-09 16:36:12.670014] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.086 [2024-11-09 16:36:12.670103] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:53.086 [2024-11-09 16:36:12.670120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:53.086 [2024-11-09 16:36:12.670129] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.086 [2024-11-09 16:36:12.670154] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:53.086 [2024-11-09 16:36:12.673098] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.086 [2024-11-09 16:36:12.673424] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:53.086 [2024-11-09 16:36:12.673450] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.926 ms 00:26:53.086 [2024-11-09 16:36:12.673462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.086 [2024-11-09 16:36:12.673712] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.086 [2024-11-09 16:36:12.673725] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:53.086 [2024-11-09 16:36:12.673735] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:26:53.086 [2024-11-09 16:36:12.673744] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.086 [2024-11-09 16:36:12.677231] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.086 [2024-11-09 16:36:12.677273] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:53.086 [2024-11-09 16:36:12.677289] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.471 ms 00:26:53.086 [2024-11-09 16:36:12.677297] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.086 [2024-11-09 16:36:12.684727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.087 [2024-11-09 16:36:12.684777] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P unmaps 00:26:53.087 [2024-11-09 16:36:12.684789] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.390 ms 00:26:53.087 [2024-11-09 16:36:12.684798] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.087 [2024-11-09 16:36:12.712280] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.087 [2024-11-09 16:36:12.712486] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:53.087 [2024-11-09 16:36:12.712509] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.404 ms 00:26:53.087
[2024-11-09 16:36:12.712518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.087 [2024-11-09 16:36:12.729099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.087 [2024-11-09 16:36:12.729150] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:53.087 [2024-11-09 16:36:12.729166] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.459 ms 00:26:53.087 [2024-11-09 16:36:12.729181] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.087 [2024-11-09 16:36:12.737978] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.087 [2024-11-09 16:36:12.738028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:53.087 [2024-11-09 16:36:12.738041] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.714 ms 00:26:53.087 [2024-11-09 16:36:12.738048] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.087 [2024-11-09 16:36:12.764459] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.087 [2024-11-09 16:36:12.764506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist band info metadata 00:26:53.087 [2024-11-09 16:36:12.764519] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.393 ms 00:26:53.087 [2024-11-09 16:36:12.764526] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.087 [2024-11-09 16:36:12.790457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.087 [2024-11-09 16:36:12.790506] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: persist trim metadata 00:26:53.087 [2024-11-09 16:36:12.790532] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.881 ms 00:26:53.087 [2024-11-09 16:36:12.790539] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.087 [2024-11-09 16:36:12.815963] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.087 [2024-11-09 16:36:12.816165] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:53.087 [2024-11-09 16:36:12.816187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.376 ms 00:26:53.087 [2024-11-09 16:36:12.816195] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.087 [2024-11-09 16:36:12.841332] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.087 [2024-11-09 16:36:12.841380] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:53.087 [2024-11-09 16:36:12.841392] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.924 ms 00:26:53.087 [2024-11-09 16:36:12.841400] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.087 [2024-11-09 16:36:12.841446] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:53.087 [2024-11-09 16:36:12.841469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:53.087 [2024-11-09 16:36:12.841481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 3072 / 261120 wr_cnt: 1 state: open 00:26:53.087 [2024-11-09 16:36:12.841490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:53.087 [2024-11-09 16:36:12.841498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:53.087 [2024-11-09 16:36:12.841506] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:53.087 [... Bands 6 through 99 condensed: each reads 0 / 261120 wr_cnt: 0 state: free ...] 00:26:53.088 [2024-11-09 16:36:12.842272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:53.088 [2024-11-09 16:36:12.842288] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:53.088 [2024-11-09 16:36:12.842296] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 902f49f9-b410-4954-8a62-8ab8809a921f 00:26:53.088 [2024-11-09 16:36:12.842304] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 264192 00:26:53.088 [2024-11-09 16:36:12.842312] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:53.088 [2024-11-09
16:36:12.842320] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:53.088 [2024-11-09 16:36:12.842329] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:53.088 [2024-11-09 16:36:12.842336] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:53.088 [2024-11-09 16:36:12.842344] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:53.088 [2024-11-09 16:36:12.842352] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:53.088 [2024-11-09 16:36:12.842366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:53.088 [2024-11-09 16:36:12.842373] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:53.088 [2024-11-09 16:36:12.842380] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.088 [2024-11-09 16:36:12.842389] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:53.088 [2024-11-09 16:36:12.842402] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.936 ms 00:26:53.088 [2024-11-09 16:36:12.842439] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.349 [2024-11-09 16:36:12.855976] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.349 [2024-11-09 16:36:12.856025] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:53.349 [2024-11-09 16:36:12.856038] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.500 ms 00:26:53.349 [2024-11-09 16:36:12.856046] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.349 [2024-11-09 16:36:12.856311] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:53.349 [2024-11-09 16:36:12.856323] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:53.349 [2024-11-09 16:36:12.856332] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:26:53.349 [2024-11-09 16:36:12.856340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.349 [2024-11-09 16:36:12.895771] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.349 [2024-11-09 16:36:12.895826] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:53.349 [2024-11-09 16:36:12.895838] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.349 [2024-11-09 16:36:12.895846] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.349 [2024-11-09 16:36:12.895919] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.349 [2024-11-09 16:36:12.895928] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:53.349 [2024-11-09 16:36:12.895937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.349 [2024-11-09 16:36:12.895946] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.349 [2024-11-09 16:36:12.896031] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.349 [2024-11-09 16:36:12.896042] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:53.349 [2024-11-09 16:36:12.896051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.349 [2024-11-09 16:36:12.896059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.349 [2024-11-09 16:36:12.896075] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:26:53.349 [2024-11-09 16:36:12.896088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:53.349 [2024-11-09 16:36:12.896096] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.349 [2024-11-09 16:36:12.896104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.349 [2024-11-09 16:36:12.975972] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.349 [2024-11-09 16:36:12.976029] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:53.349 [2024-11-09 16:36:12.976043] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.349 [2024-11-09 16:36:12.976051] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.349 [2024-11-09 16:36:13.008379] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.349 [2024-11-09 16:36:13.008435] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:53.349 [2024-11-09 16:36:13.008446] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.349 [2024-11-09 16:36:13.008455] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.349 [2024-11-09 16:36:13.008523] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.349 [2024-11-09 16:36:13.008533] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:53.349 [2024-11-09 16:36:13.008542] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.349 [2024-11-09 16:36:13.008550] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.349 [2024-11-09 16:36:13.008592] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.349 [2024-11-09 16:36:13.008602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:53.349 [2024-11-09 16:36:13.008615] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.349 [2024-11-09 16:36:13.008623] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.349 [2024-11-09 16:36:13.008725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.349 [2024-11-09 16:36:13.008735] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:53.349 [2024-11-09 16:36:13.008744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.349 [2024-11-09 16:36:13.008752] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.349 [2024-11-09 16:36:13.008787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.349 [2024-11-09 16:36:13.008797] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:53.349 [2024-11-09 16:36:13.008806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.349 [2024-11-09 16:36:13.008818] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.349 [2024-11-09 16:36:13.008860] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.349 [2024-11-09 16:36:13.008870] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:53.349 [2024-11-09 16:36:13.008892] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.349 [2024-11-09 16:36:13.008901] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.349 
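
A few entries up, ftl_dev_dump_stats printed WAF: inf; that is plain arithmetic, since write amplification is total device writes over user writes and this run recorded 960 total writes against 0 user writes, so the ratio is undefined and logged as inf. Note also that every management step, including the zero-duration Rollback entries here, is emitted by mngt/ftl_mngt.c as the same quadruplet: Action, name, duration, status. That regularity makes a capture of this console easy to mine. A rough bash sketch, assuming the output has been saved to a file named ftl.log (the file name and the post-processing are ours, not part of the test harness):

    # break the stream at the '[' of each timestamp, keep the name/duration
    # fields, then pair consecutive lines (paste - - joins them two at a time)
    tr '[' '\n' < ftl.log | grep -oE '(name|duration):[^]]*' | paste - -

This leans on the 407 (name) and 409 (duration) trace lines strictly alternating, which holds for every quadruplet above; the trailing elapsed stamps are left attached, so treat the result as a quick eyeball aid rather than parsed data.
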
[2024-11-09 16:36:13.008949] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:53.349 [2024-11-09 16:36:13.008959] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:53.349 [2024-11-09 16:36:13.008971] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:53.349 [2024-11-09 16:36:13.008979] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:53.349 [2024-11-09 16:36:13.009111] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.067 ms, result 0 00:26:54.293 00:26:54.293 00:26:54.293 16:36:13 -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:56.210 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:26:56.210 16:36:15 -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:26:56.210 16:36:15 -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:26:56.210 16:36:15 -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:56.210 16:36:15 -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:56.471 16:36:15 -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:26:56.471 16:36:16 -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:56.471 16:36:16 -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:26:56.471 Process with pid 76179 is not found 00:26:56.472 16:36:16 -- ftl/dirty_shutdown.sh@37 -- # killprocess 76179 00:26:56.472 16:36:16 -- common/autotest_common.sh@936 -- # '[' -z 76179 ']' 00:26:56.472 16:36:16 -- common/autotest_common.sh@940 -- # kill -0 76179 00:26:56.472 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (76179) - No such process 00:26:56.472 16:36:16 -- common/autotest_common.sh@963 -- # echo 'Process with pid 76179 is not found' 00:26:56.472 16:36:16 -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:26:56.734 Remove shared memory files 00:26:56.734 16:36:16 -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:26:56.734 16:36:16 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:56.734 16:36:16 -- ftl/common.sh@205 -- # rm -f rm -f 00:26:56.734 16:36:16 -- ftl/common.sh@206 -- # rm -f rm -f 00:26:56.734 16:36:16 -- ftl/common.sh@207 -- # rm -f rm -f 00:26:56.734 16:36:16 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:56.734 16:36:16 -- ftl/common.sh@209 -- # rm -f rm -f 00:26:56.734 ************************************ 00:26:56.734 END TEST ftl_dirty_shutdown 00:26:56.734 ************************************ 00:26:56.734 00:26:56.734 real 4m14.994s 00:26:56.734 user 4m43.574s 00:26:56.734 sys 0m28.908s 00:26:56.734 16:36:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:26:56.734 16:36:16 -- common/autotest_common.sh@10 -- # set +x 00:26:56.734 16:36:16 -- ftl/ftl.sh@79 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:56.734 16:36:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:26:56.734 16:36:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:56.734 16:36:16 -- common/autotest_common.sh@10 -- # set +x 00:26:56.734 ************************************ 00:26:56.734 START TEST ftl_upgrade_shutdown 00:26:56.734 ************************************ 00:26:56.734 16:36:16 -- common/autotest_common.sh@1114 -- # 
/home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:07.0 0000:00:06.0 00:26:56.996 * Looking for test storage... 00:26:56.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:56.996 16:36:16 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:26:56.996 16:36:16 -- common/autotest_common.sh@1690 -- # lcov --version 00:26:56.996 16:36:16 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:26:56.996 16:36:16 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:26:56.996 16:36:16 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:26:56.996 16:36:16 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:26:56.996 16:36:16 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:26:56.996 16:36:16 -- scripts/common.sh@335 -- # IFS=.-: 00:26:56.996 16:36:16 -- scripts/common.sh@335 -- # read -ra ver1 00:26:56.996 16:36:16 -- scripts/common.sh@336 -- # IFS=.-: 00:26:56.996 16:36:16 -- scripts/common.sh@336 -- # read -ra ver2 00:26:56.996 16:36:16 -- scripts/common.sh@337 -- # local 'op=<' 00:26:56.996 16:36:16 -- scripts/common.sh@339 -- # ver1_l=2 00:26:56.996 16:36:16 -- scripts/common.sh@340 -- # ver2_l=1 00:26:56.996 16:36:16 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:26:56.996 16:36:16 -- scripts/common.sh@343 -- # case "$op" in 00:26:56.996 16:36:16 -- scripts/common.sh@344 -- # : 1 00:26:56.996 16:36:16 -- scripts/common.sh@363 -- # (( v = 0 )) 00:26:56.996 16:36:16 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:56.996 16:36:16 -- scripts/common.sh@364 -- # decimal 1 00:26:56.996 16:36:16 -- scripts/common.sh@352 -- # local d=1 00:26:56.996 16:36:16 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:56.996 16:36:16 -- scripts/common.sh@354 -- # echo 1 00:26:56.996 16:36:16 -- scripts/common.sh@364 -- # ver1[v]=1 00:26:56.996 16:36:16 -- scripts/common.sh@365 -- # decimal 2 00:26:56.996 16:36:16 -- scripts/common.sh@352 -- # local d=2 00:26:56.996 16:36:16 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:56.996 16:36:16 -- scripts/common.sh@354 -- # echo 2 00:26:56.996 16:36:16 -- scripts/common.sh@365 -- # ver2[v]=2 00:26:56.996 16:36:16 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:26:56.996 16:36:16 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:26:56.996 16:36:16 -- scripts/common.sh@367 -- # return 0 00:26:56.996 16:36:16 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:56.996 16:36:16 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:26:56.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.996 --rc genhtml_branch_coverage=1 00:26:56.996 --rc genhtml_function_coverage=1 00:26:56.996 --rc genhtml_legend=1 00:26:56.996 --rc geninfo_all_blocks=1 00:26:56.996 --rc geninfo_unexecuted_blocks=1 00:26:56.996 00:26:56.996 ' 00:26:56.996 16:36:16 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:26:56.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.996 --rc genhtml_branch_coverage=1 00:26:56.996 --rc genhtml_function_coverage=1 00:26:56.996 --rc genhtml_legend=1 00:26:56.996 --rc geninfo_all_blocks=1 00:26:56.996 --rc geninfo_unexecuted_blocks=1 00:26:56.996 00:26:56.996 ' 00:26:56.996 16:36:16 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:26:56.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.996 --rc genhtml_branch_coverage=1 00:26:56.996 --rc genhtml_function_coverage=1 00:26:56.996 --rc genhtml_legend=1 00:26:56.996 
--rc geninfo_all_blocks=1 00:26:56.996 --rc geninfo_unexecuted_blocks=1 00:26:56.996 00:26:56.996 ' 00:26:56.996 16:36:16 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:26:56.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:56.996 --rc genhtml_branch_coverage=1 00:26:56.996 --rc genhtml_function_coverage=1 00:26:56.996 --rc genhtml_legend=1 00:26:56.996 --rc geninfo_all_blocks=1 00:26:56.996 --rc geninfo_unexecuted_blocks=1 00:26:56.996 00:26:56.996 ' 00:26:56.996 16:36:16 -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:56.996 16:36:16 -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:26:56.996 16:36:16 -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:56.996 16:36:16 -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:56.996 16:36:16 -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:56.996 16:36:16 -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:56.996 16:36:16 -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:56.996 16:36:16 -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:56.996 16:36:16 -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:56.996 16:36:16 -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:56.996 16:36:16 -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:56.996 16:36:16 -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:56.996 16:36:16 -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:56.996 16:36:16 -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:56.996 16:36:16 -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:56.996 16:36:16 -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:56.996 16:36:16 -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:56.996 16:36:16 -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:56.997 16:36:16 -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:56.997 16:36:16 -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:56.997 16:36:16 -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:56.997 16:36:16 -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:56.997 16:36:16 -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:56.997 16:36:16 -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:56.997 16:36:16 -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:56.997 16:36:16 -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:56.997 16:36:16 -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:56.997 16:36:16 -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:56.997 16:36:16 -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:56.997 16:36:16 -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:56.997 16:36:16 -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:56.997 16:36:16 -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:56.997 16:36:16 -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:07.0 00:26:56.997 16:36:16 -- 
ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:07.0 00:26:56.997 16:36:16 -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:56.997 16:36:16 -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:56.997 16:36:16 -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:06.0 00:26:56.997 16:36:16 -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:06.0 00:26:56.997 16:36:16 -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:56.997 16:36:16 -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:56.997 16:36:16 -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:56.997 16:36:16 -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:56.997 16:36:16 -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:56.997 16:36:16 -- ftl/common.sh@81 -- # local base_bdev= 00:26:56.997 16:36:16 -- ftl/common.sh@82 -- # local cache_bdev= 00:26:56.997 16:36:16 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:56.997 16:36:16 -- ftl/common.sh@89 -- # spdk_tgt_pid=78945 00:26:56.997 16:36:16 -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:56.997 16:36:16 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:56.997 16:36:16 -- ftl/common.sh@91 -- # waitforlisten 78945 00:26:56.997 16:36:16 -- common/autotest_common.sh@829 -- # '[' -z 78945 ']' 00:26:56.997 16:36:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:56.997 16:36:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:56.997 16:36:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:56.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:56.997 16:36:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:56.997 16:36:16 -- common/autotest_common.sh@10 -- # set +x 00:26:56.997 [2024-11-09 16:36:16.699825] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
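
tcp_target_setup above reduces to three moves: launch spdk_tgt pinned to core 0 via '--cpumask=[0]', record its pid (78945 in this run), and block in waitforlisten until the UNIX domain RPC socket /var/tmp/spdk.sock answers. A minimal stand-alone sketch of that flow, assuming this run's /home/vagrant/spdk_repo layout and substituting an rpc_get_methods poll for the harness's waitforlisten helper:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" '--cpumask=[0]' &
    spdk_tgt_pid=$!
    # poll the default RPC socket until the target is ready to serve RPCs
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The recorded pid is what the killprocess cleanup in common/autotest_common.sh later keys off, so a real harness also installs a trap (as upgrade_shutdown.sh@17 does) to tear the target down on exit.
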
00:26:56.997 [2024-11-09 16:36:16.700152] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78945 ] 00:26:57.259 [2024-11-09 16:36:16.855665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.520 [2024-11-09 16:36:17.077560] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:57.520 [2024-11-09 16:36:17.078028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:58.465 16:36:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:58.465 16:36:18 -- common/autotest_common.sh@862 -- # return 0 00:26:58.465 16:36:18 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:58.465 16:36:18 -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:58.465 16:36:18 -- ftl/common.sh@99 -- # local params 00:26:58.465 16:36:18 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:58.465 16:36:18 -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:58.465 16:36:18 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:58.465 16:36:18 -- ftl/common.sh@101 -- # [[ -z 0000:00:07.0 ]] 00:26:58.465 16:36:18 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:58.465 16:36:18 -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:58.465 16:36:18 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:58.465 16:36:18 -- ftl/common.sh@101 -- # [[ -z 0000:00:06.0 ]] 00:26:58.465 16:36:18 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:58.465 16:36:18 -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:58.465 16:36:18 -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:58.465 16:36:18 -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:58.727 16:36:18 -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:07.0 20480 00:26:58.727 16:36:18 -- ftl/common.sh@54 -- # local name=base 00:26:58.727 16:36:18 -- ftl/common.sh@55 -- # local base_bdf=0000:00:07.0 00:26:58.727 16:36:18 -- ftl/common.sh@56 -- # local size=20480 00:26:58.727 16:36:18 -- ftl/common.sh@59 -- # local base_bdev 00:26:58.727 16:36:18 -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0 00:26:58.988 16:36:18 -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:58.988 16:36:18 -- ftl/common.sh@62 -- # local base_size 00:26:58.988 16:36:18 -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:58.988 16:36:18 -- common/autotest_common.sh@1367 -- # local bdev_name=basen1 00:26:58.988 16:36:18 -- common/autotest_common.sh@1368 -- # local bdev_info 00:26:58.988 16:36:18 -- common/autotest_common.sh@1369 -- # local bs 00:26:58.988 16:36:18 -- common/autotest_common.sh@1370 -- # local nb 00:26:58.988 16:36:18 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:58.988 16:36:18 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:26:58.988 { 00:26:58.988 "name": "basen1", 00:26:58.988 "aliases": [ 00:26:58.988 "cd1f81a4-9eec-4b32-9038-6946718e444e" 00:26:58.988 ], 00:26:58.988 "product_name": "NVMe disk", 00:26:58.988 "block_size": 4096, 00:26:58.988 "num_blocks": 1310720, 00:26:58.988 "uuid": "cd1f81a4-9eec-4b32-9038-6946718e444e", 00:26:58.988 "assigned_rate_limits": { 00:26:58.988 "rw_ios_per_sec": 0, 00:26:58.988 
"rw_mbytes_per_sec": 0, 00:26:58.988 "r_mbytes_per_sec": 0, 00:26:58.988 "w_mbytes_per_sec": 0 00:26:58.988 }, 00:26:58.988 "claimed": true, 00:26:58.988 "claim_type": "read_many_write_one", 00:26:58.988 "zoned": false, 00:26:58.988 "supported_io_types": { 00:26:58.988 "read": true, 00:26:58.988 "write": true, 00:26:58.988 "unmap": true, 00:26:58.988 "write_zeroes": true, 00:26:58.988 "flush": true, 00:26:58.988 "reset": true, 00:26:58.988 "compare": true, 00:26:58.988 "compare_and_write": false, 00:26:58.988 "abort": true, 00:26:58.988 "nvme_admin": true, 00:26:58.988 "nvme_io": true 00:26:58.988 }, 00:26:58.988 "driver_specific": { 00:26:58.988 "nvme": [ 00:26:58.988 { 00:26:58.988 "pci_address": "0000:00:07.0", 00:26:58.988 "trid": { 00:26:58.988 "trtype": "PCIe", 00:26:58.988 "traddr": "0000:00:07.0" 00:26:58.988 }, 00:26:58.988 "ctrlr_data": { 00:26:58.988 "cntlid": 0, 00:26:58.988 "vendor_id": "0x1b36", 00:26:58.988 "model_number": "QEMU NVMe Ctrl", 00:26:58.988 "serial_number": "12341", 00:26:58.988 "firmware_revision": "8.0.0", 00:26:58.988 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:58.988 "oacs": { 00:26:58.988 "security": 0, 00:26:58.988 "format": 1, 00:26:58.988 "firmware": 0, 00:26:58.988 "ns_manage": 1 00:26:58.988 }, 00:26:58.988 "multi_ctrlr": false, 00:26:58.988 "ana_reporting": false 00:26:58.989 }, 00:26:58.989 "vs": { 00:26:58.989 "nvme_version": "1.4" 00:26:58.989 }, 00:26:58.989 "ns_data": { 00:26:58.989 "id": 1, 00:26:58.989 "can_share": false 00:26:58.989 } 00:26:58.989 } 00:26:58.989 ], 00:26:58.989 "mp_policy": "active_passive" 00:26:58.989 } 00:26:58.989 } 00:26:58.989 ]' 00:26:58.989 16:36:18 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:26:59.251 16:36:18 -- common/autotest_common.sh@1372 -- # bs=4096 00:26:59.251 16:36:18 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:26:59.251 16:36:18 -- common/autotest_common.sh@1373 -- # nb=1310720 00:26:59.251 16:36:18 -- common/autotest_common.sh@1376 -- # bdev_size=5120 00:26:59.251 16:36:18 -- common/autotest_common.sh@1377 -- # echo 5120 00:26:59.251 16:36:18 -- ftl/common.sh@63 -- # base_size=5120 00:26:59.251 16:36:18 -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:59.251 16:36:18 -- ftl/common.sh@67 -- # clear_lvols 00:26:59.251 16:36:18 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:59.251 16:36:18 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:59.513 16:36:19 -- ftl/common.sh@28 -- # stores=8a65c4cb-9e44-4c36-b772-c6b150cda28d 00:26:59.513 16:36:19 -- ftl/common.sh@29 -- # for lvs in $stores 00:26:59.513 16:36:19 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8a65c4cb-9e44-4c36-b772-c6b150cda28d 00:26:59.513 16:36:19 -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:59.772 16:36:19 -- ftl/common.sh@68 -- # lvs=615191f5-59cd-4d75-865b-b017097f4e9a 00:26:59.772 16:36:19 -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 615191f5-59cd-4d75-865b-b017097f4e9a 00:27:00.031 16:36:19 -- ftl/common.sh@107 -- # base_bdev=9c5e6b1a-a956-48c1-b3dc-10dad50ccc19 00:27:00.031 16:36:19 -- ftl/common.sh@108 -- # [[ -z 9c5e6b1a-a956-48c1-b3dc-10dad50ccc19 ]] 00:27:00.032 16:36:19 -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:06.0 9c5e6b1a-a956-48c1-b3dc-10dad50ccc19 5120 00:27:00.032 16:36:19 -- ftl/common.sh@35 -- # local name=cache 00:27:00.032 16:36:19 -- 
ftl/common.sh@36 -- # local cache_bdf=0000:00:06.0 00:27:00.032 16:36:19 -- ftl/common.sh@37 -- # local base_bdev=9c5e6b1a-a956-48c1-b3dc-10dad50ccc19 00:27:00.032 16:36:19 -- ftl/common.sh@38 -- # local cache_size=5120 00:27:00.032 16:36:19 -- ftl/common.sh@41 -- # get_bdev_size 9c5e6b1a-a956-48c1-b3dc-10dad50ccc19 00:27:00.032 16:36:19 -- common/autotest_common.sh@1367 -- # local bdev_name=9c5e6b1a-a956-48c1-b3dc-10dad50ccc19 00:27:00.032 16:36:19 -- common/autotest_common.sh@1368 -- # local bdev_info 00:27:00.032 16:36:19 -- common/autotest_common.sh@1369 -- # local bs 00:27:00.032 16:36:19 -- common/autotest_common.sh@1370 -- # local nb 00:27:00.032 16:36:19 -- common/autotest_common.sh@1371 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9c5e6b1a-a956-48c1-b3dc-10dad50ccc19 00:27:00.291 16:36:19 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:27:00.291 { 00:27:00.291 "name": "9c5e6b1a-a956-48c1-b3dc-10dad50ccc19", 00:27:00.291 "aliases": [ 00:27:00.291 "lvs/basen1p0" 00:27:00.291 ], 00:27:00.291 "product_name": "Logical Volume", 00:27:00.291 "block_size": 4096, 00:27:00.291 "num_blocks": 5242880, 00:27:00.291 "uuid": "9c5e6b1a-a956-48c1-b3dc-10dad50ccc19", 00:27:00.291 "assigned_rate_limits": { 00:27:00.291 "rw_ios_per_sec": 0, 00:27:00.291 "rw_mbytes_per_sec": 0, 00:27:00.291 "r_mbytes_per_sec": 0, 00:27:00.291 "w_mbytes_per_sec": 0 00:27:00.291 }, 00:27:00.291 "claimed": false, 00:27:00.291 "zoned": false, 00:27:00.291 "supported_io_types": { 00:27:00.291 "read": true, 00:27:00.291 "write": true, 00:27:00.291 "unmap": true, 00:27:00.291 "write_zeroes": true, 00:27:00.291 "flush": false, 00:27:00.291 "reset": true, 00:27:00.291 "compare": false, 00:27:00.291 "compare_and_write": false, 00:27:00.291 "abort": false, 00:27:00.291 "nvme_admin": false, 00:27:00.291 "nvme_io": false 00:27:00.291 }, 00:27:00.291 "driver_specific": { 00:27:00.291 "lvol": { 00:27:00.291 "lvol_store_uuid": "615191f5-59cd-4d75-865b-b017097f4e9a", 00:27:00.291 "base_bdev": "basen1", 00:27:00.291 "thin_provision": true, 00:27:00.291 "snapshot": false, 00:27:00.291 "clone": false, 00:27:00.291 "esnap_clone": false 00:27:00.291 } 00:27:00.291 } 00:27:00.291 } 00:27:00.291 ]' 00:27:00.291 16:36:19 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:27:00.291 16:36:19 -- common/autotest_common.sh@1372 -- # bs=4096 00:27:00.291 16:36:19 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:27:00.291 16:36:19 -- common/autotest_common.sh@1373 -- # nb=5242880 00:27:00.291 16:36:19 -- common/autotest_common.sh@1376 -- # bdev_size=20480 00:27:00.291 16:36:19 -- common/autotest_common.sh@1377 -- # echo 20480 00:27:00.291 16:36:19 -- ftl/common.sh@41 -- # local base_size=1024 00:27:00.291 16:36:19 -- ftl/common.sh@44 -- # local nvc_bdev 00:27:00.291 16:36:19 -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0 00:27:00.549 16:36:20 -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:00.549 16:36:20 -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:00.549 16:36:20 -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:00.809 16:36:20 -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:00.809 16:36:20 -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:00.809 16:36:20 -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 9c5e6b1a-a956-48c1-b3dc-10dad50ccc19 -c cachen1p0 --l2p_dram_limit 2 00:27:00.809 
[2024-11-09 16:36:20.515997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.809 [2024-11-09 16:36:20.516038] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:00.809 [2024-11-09 16:36:20.516051] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:00.809 [2024-11-09 16:36:20.516059] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.809 [2024-11-09 16:36:20.516100] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.809 [2024-11-09 16:36:20.516108] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:00.809 [2024-11-09 16:36:20.516116] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:00.809 [2024-11-09 16:36:20.516121] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.809 [2024-11-09 16:36:20.516137] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:00.809 [2024-11-09 16:36:20.516711] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:00.809 [2024-11-09 16:36:20.516767] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.809 [2024-11-09 16:36:20.516774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:00.809 [2024-11-09 16:36:20.516784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.630 ms 00:27:00.809 [2024-11-09 16:36:20.516790] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.809 [2024-11-09 16:36:20.516983] mngt/ftl_mngt_md.c: 567:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 298dc075-7548-4667-aff5-e02a8d8d0c87 00:27:00.809 [2024-11-09 16:36:20.517973] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.809 [2024-11-09 16:36:20.517991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:00.809 [2024-11-09 16:36:20.517999] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:00.809 [2024-11-09 16:36:20.518008] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.809 [2024-11-09 16:36:20.522725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.809 [2024-11-09 16:36:20.522755] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:00.809 [2024-11-09 16:36:20.522762] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.685 ms 00:27:00.809 [2024-11-09 16:36:20.522770] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.809 [2024-11-09 16:36:20.522800] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.809 [2024-11-09 16:36:20.522808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:00.809 [2024-11-09 16:36:20.522814] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:00.809 [2024-11-09 16:36:20.522823] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.809 [2024-11-09 16:36:20.522859] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.809 [2024-11-09 16:36:20.522868] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:00.809 [2024-11-09 16:36:20.522875] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:00.809 [2024-11-09 16:36:20.522881] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:27:00.809 [2024-11-09 16:36:20.522900] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:00.809 [2024-11-09 16:36:20.525828] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.809 [2024-11-09 16:36:20.525852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:00.809 [2024-11-09 16:36:20.525861] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.932 ms 00:27:00.809 [2024-11-09 16:36:20.525867] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.809 [2024-11-09 16:36:20.525890] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.809 [2024-11-09 16:36:20.525896] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:00.809 [2024-11-09 16:36:20.525904] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:00.809 [2024-11-09 16:36:20.525909] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.809 [2024-11-09 16:36:20.525924] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:00.809 [2024-11-09 16:36:20.526007] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:00.809 [2024-11-09 16:36:20.526019] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:00.809 [2024-11-09 16:36:20.526027] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:27:00.809 [2024-11-09 16:36:20.526036] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:00.809 [2024-11-09 16:36:20.526044] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:00.809 [2024-11-09 16:36:20.526052] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:00.809 [2024-11-09 16:36:20.526058] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:00.809 [2024-11-09 16:36:20.526065] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:00.810 [2024-11-09 16:36:20.526070] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:00.810 [2024-11-09 16:36:20.526078] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.810 [2024-11-09 16:36:20.526088] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:00.810 [2024-11-09 16:36:20.526095] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.153 ms 00:27:00.810 [2024-11-09 16:36:20.526101] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.810 [2024-11-09 16:36:20.526148] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.810 [2024-11-09 16:36:20.526155] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:00.810 [2024-11-09 16:36:20.526163] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:27:00.810 [2024-11-09 16:36:20.526168] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.810 [2024-11-09 16:36:20.526238] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:00.810 [2024-11-09 16:36:20.526246] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:00.810 [2024-11-09 
16:36:20.526254] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:00.810 [2024-11-09 16:36:20.526260] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:00.810 [2024-11-09 16:36:20.526267] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:00.810 [2024-11-09 16:36:20.526272] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:00.810 [2024-11-09 16:36:20.526279] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:00.810 [2024-11-09 16:36:20.526284] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:00.810 [2024-11-09 16:36:20.526290] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:00.810 [2024-11-09 16:36:20.526295] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:00.810 [2024-11-09 16:36:20.526302] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:00.810 [2024-11-09 16:36:20.526307] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:00.810 [2024-11-09 16:36:20.526313] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:00.810 [2024-11-09 16:36:20.526318] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:00.810 [2024-11-09 16:36:20.526327] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:00.810 [2024-11-09 16:36:20.526332] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:00.810 [2024-11-09 16:36:20.526340] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:00.810 [2024-11-09 16:36:20.526344] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:00.810 [2024-11-09 16:36:20.526351] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:00.810 [2024-11-09 16:36:20.526355] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:00.810 [2024-11-09 16:36:20.526362] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:00.810 [2024-11-09 16:36:20.526368] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:00.810 [2024-11-09 16:36:20.526374] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:00.810 [2024-11-09 16:36:20.526378] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:00.810 [2024-11-09 16:36:20.526384] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:00.810 [2024-11-09 16:36:20.526389] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:00.810 [2024-11-09 16:36:20.526395] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:00.810 [2024-11-09 16:36:20.526400] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:00.810 [2024-11-09 16:36:20.526406] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:00.810 [2024-11-09 16:36:20.526411] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:00.810 [2024-11-09 16:36:20.526417] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:00.810 [2024-11-09 16:36:20.526422] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:00.810 [2024-11-09 16:36:20.526429] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:00.810 [2024-11-09 16:36:20.526434] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:00.810 [2024-11-09 
16:36:20.526440] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:00.810 [2024-11-09 16:36:20.526445] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:00.810 [2024-11-09 16:36:20.526451] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:00.810 [2024-11-09 16:36:20.526456] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:00.810 [2024-11-09 16:36:20.526462] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:00.810 [2024-11-09 16:36:20.526467] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:00.810 [2024-11-09 16:36:20.526473] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:00.810 [2024-11-09 16:36:20.526479] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:00.810 [2024-11-09 16:36:20.526485] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:00.810 [2024-11-09 16:36:20.526492] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:00.810 [2024-11-09 16:36:20.526499] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:00.810 [2024-11-09 16:36:20.526504] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:00.810 [2024-11-09 16:36:20.526511] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:00.810 [2024-11-09 16:36:20.526517] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:00.810 [2024-11-09 16:36:20.526524] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:00.810 [2024-11-09 16:36:20.526529] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:00.810 [2024-11-09 16:36:20.526537] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:00.810 [2024-11-09 16:36:20.526543] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:00.810 [2024-11-09 16:36:20.526551] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:00.810 [2024-11-09 16:36:20.526556] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:00.810 [2024-11-09 16:36:20.526563] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:00.810 [2024-11-09 16:36:20.526568] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:00.810 [2024-11-09 16:36:20.526575] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:00.810 [2024-11-09 16:36:20.526581] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:00.810 [2024-11-09 16:36:20.526587] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:00.810 [2024-11-09 16:36:20.526592] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:00.810 [2024-11-09 16:36:20.526599] upgrade/ftl_sb_v5.c: 
415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:00.810 [2024-11-09 16:36:20.526604] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:00.810 [2024-11-09 16:36:20.526611] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:00.810 [2024-11-09 16:36:20.526616] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:00.810 [2024-11-09 16:36:20.526626] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:00.810 [2024-11-09 16:36:20.526631] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:00.810 [2024-11-09 16:36:20.526639] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:00.810 [2024-11-09 16:36:20.526645] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:00.810 [2024-11-09 16:36:20.526652] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:00.810 [2024-11-09 16:36:20.526657] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:00.810 [2024-11-09 16:36:20.526663] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:00.810 [2024-11-09 16:36:20.526668] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.810 [2024-11-09 16:36:20.526675] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:00.810 [2024-11-09 16:36:20.526680] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.480 ms 00:27:00.810 [2024-11-09 16:36:20.526687] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.810 [2024-11-09 16:36:20.538326] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.810 [2024-11-09 16:36:20.538448] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:00.810 [2024-11-09 16:36:20.538460] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.609 ms 00:27:00.810 [2024-11-09 16:36:20.538468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.810 [2024-11-09 16:36:20.538498] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.810 [2024-11-09 16:36:20.538509] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:00.810 [2024-11-09 16:36:20.538515] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:00.810 [2024-11-09 16:36:20.538522] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.810 [2024-11-09 16:36:20.562438] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.810 [2024-11-09 16:36:20.562465] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:00.810 [2024-11-09 16:36:20.562474] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 23.883 ms 00:27:00.810 [2024-11-09 
16:36:20.562484] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.810 [2024-11-09 16:36:20.562505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.810 [2024-11-09 16:36:20.562514] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:00.810 [2024-11-09 16:36:20.562521] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:00.810 [2024-11-09 16:36:20.562530] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.810 [2024-11-09 16:36:20.562838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.810 [2024-11-09 16:36:20.562852] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:00.810 [2024-11-09 16:36:20.562858] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.275 ms 00:27:00.811 [2024-11-09 16:36:20.562865] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.811 [2024-11-09 16:36:20.562899] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.811 [2024-11-09 16:36:20.562908] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:00.811 [2024-11-09 16:36:20.562914] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:00.811 [2024-11-09 16:36:20.562921] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:00.811 [2024-11-09 16:36:20.575059] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:00.811 [2024-11-09 16:36:20.575087] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:00.811 [2024-11-09 16:36:20.575094] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.123 ms 00:27:00.811 [2024-11-09 16:36:20.575103] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.071 [2024-11-09 16:36:20.584037] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:01.071 [2024-11-09 16:36:20.584787] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.071 [2024-11-09 16:36:20.584811] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:01.071 [2024-11-09 16:36:20.584820] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.626 ms 00:27:01.071 [2024-11-09 16:36:20.584826] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.071 [2024-11-09 16:36:20.607239] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:01.071 [2024-11-09 16:36:20.607270] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:01.071 [2024-11-09 16:36:20.607281] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 22.393 ms 00:27:01.071 [2024-11-09 16:36:20.607287] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:01.071 [2024-11-09 16:36:20.607321] mngt/ftl_mngt_misc.c: 164:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] First startup needs to scrub nv cache data region, this may take some time. 
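The trace above is the device bring-up driven by bdev_ftl_create, and the 4 GiB scrub that follows is the first-startup wipe of the NV cache data region. A minimal sketch of the RPC sequence this run used to get here, with the names, BDFs and sizes taken from the commands logged earlier (the two UUIDs are placeholders for the values printed above):

  # attach the base (data) and cache (write buffer) NVMe controllers
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:07.0
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:06.0
  # carve a 20 GiB thin-provisioned lvol out of the base namespace
  # (any stale lvstore from a previous run is deleted first, see clear_lvols above)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u <lvs_uuid>
  # take a 5 GiB split of the cache namespace for the write buffer
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1
  # build the FTL bdev on top of the two; -t 60 raises the RPC timeout to cover the scrub
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d <lvol_uuid> -c cachen1p0 --l2p_dram_limit 2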
00:27:01.071 [2024-11-09 16:36:20.607329] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 4GiB 00:27:04.372 [2024-11-09 16:36:23.802929] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.372 [2024-11-09 16:36:23.803004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:04.372 [2024-11-09 16:36:23.803026] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3195.584 ms 00:27:04.372 [2024-11-09 16:36:23.803035] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.372 [2024-11-09 16:36:23.803163] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.372 [2024-11-09 16:36:23.803178] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:04.372 [2024-11-09 16:36:23.803192] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:27:04.372 [2024-11-09 16:36:23.803200] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.372 [2024-11-09 16:36:23.829071] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.372 [2024-11-09 16:36:23.829299] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:04.372 [2024-11-09 16:36:23.829330] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.792 ms 00:27:04.372 [2024-11-09 16:36:23.829340] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.372 [2024-11-09 16:36:23.854644] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.372 [2024-11-09 16:36:23.854689] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:04.372 [2024-11-09 16:36:23.854708] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 25.191 ms 00:27:04.372 [2024-11-09 16:36:23.854715] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.372 [2024-11-09 16:36:23.855069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.372 [2024-11-09 16:36:23.855081] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:04.372 [2024-11-09 16:36:23.855092] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.304 ms 00:27:04.372 [2024-11-09 16:36:23.855102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.372 [2024-11-09 16:36:23.925723] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.372 [2024-11-09 16:36:23.925774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:04.372 [2024-11-09 16:36:23.925791] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 70.576 ms 00:27:04.372 [2024-11-09 16:36:23.925800] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.372 [2024-11-09 16:36:23.953023] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.372 [2024-11-09 16:36:23.953218] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:04.372 [2024-11-09 16:36:23.953259] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 27.165 ms 00:27:04.372 [2024-11-09 16:36:23.953268] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.372 [2024-11-09 16:36:23.954743] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.372 [2024-11-09 16:36:23.954790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:04.372 [2024-11-09 16:36:23.954808] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.397 ms 00:27:04.372 [2024-11-09 16:36:23.954816] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.372 [2024-11-09 16:36:23.981369] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.372 [2024-11-09 16:36:23.981415] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:04.372 [2024-11-09 16:36:23.981429] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.500 ms 00:27:04.372 [2024-11-09 16:36:23.981437] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.372 [2024-11-09 16:36:23.981492] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.372 [2024-11-09 16:36:23.981501] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:04.372 [2024-11-09 16:36:23.981512] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:04.372 [2024-11-09 16:36:23.981520] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.372 [2024-11-09 16:36:23.981617] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.372 [2024-11-09 16:36:23.981628] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:04.372 [2024-11-09 16:36:23.981639] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:27:04.372 [2024-11-09 16:36:23.981647] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.372 [2024-11-09 16:36:23.982858] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3466.359 ms, result 0 00:27:04.372 { 00:27:04.372 "name": "ftl", 00:27:04.372 "uuid": "298dc075-7548-4667-aff5-e02a8d8d0c87" 00:27:04.372 } 00:27:04.372 16:36:24 -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:04.634 [2024-11-09 16:36:24.193910] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.634 16:36:24 -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:04.896 16:36:24 -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:04.896 [2024-11-09 16:36:24.590366] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:04.896 16:36:24 -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:05.157 [2024-11-09 16:36:24.795839] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:05.157 16:36:24 -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:05.417 Fill FTL, iteration 1 00:27:05.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 
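With the FTL bdev up, the target exports it over NVMe/TCP on loopback so that a second SPDK process can drive I/O against it; the four RPCs above boil down to the following (a sketch, reusing this run's NQN and port):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1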
00:27:05.417 16:36:25 -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:05.417 16:36:25 -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:05.417 16:36:25 -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:05.417 16:36:25 -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:05.417 16:36:25 -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:05.417 16:36:25 -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:05.417 16:36:25 -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:05.417 16:36:25 -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:05.417 16:36:25 -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:05.417 16:36:25 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:05.417 16:36:25 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:05.417 16:36:25 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:05.417 16:36:25 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:05.417 16:36:25 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:05.417 16:36:25 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:05.417 16:36:25 -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:05.417 16:36:25 -- ftl/common.sh@163 -- # spdk_ini_pid=79070 00:27:05.417 16:36:25 -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:05.417 16:36:25 -- ftl/common.sh@165 -- # waitforlisten 79070 /var/tmp/spdk.tgt.sock 00:27:05.417 16:36:25 -- common/autotest_common.sh@829 -- # '[' -z 79070 ']' 00:27:05.417 16:36:25 -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:05.417 16:36:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:05.417 16:36:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:05.417 16:36:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:05.417 16:36:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:05.418 16:36:25 -- common/autotest_common.sh@10 -- # set +x 00:27:05.679 [2024-11-09 16:36:25.207340] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
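For scale: the parameters set above mean each fill iteration moves bs * count = 1,048,576 B * 1024 = 1,073,741,824 B, exactly the 1 GiB in size, at queue depth qd=2; with iterations=2 the test writes 2 GiB in total, the second pass seeking 1024 blocks into the device (1 GiB at bs=1 MiB) so the two ranges do not overlap.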
00:27:05.679 [2024-11-09 16:36:25.207480] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79070 ] 00:27:05.679 [2024-11-09 16:36:25.352849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:05.941 [2024-11-09 16:36:25.577876] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:05.941 [2024-11-09 16:36:25.578110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.323 16:36:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:07.323 16:36:26 -- common/autotest_common.sh@862 -- # return 0 00:27:07.323 16:36:26 -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:07.323 ftln1 00:27:07.323 16:36:26 -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:07.323 16:36:26 -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:07.581 16:36:27 -- ftl/common.sh@173 -- # echo ']}' 00:27:07.581 16:36:27 -- ftl/common.sh@176 -- # killprocess 79070 00:27:07.581 16:36:27 -- common/autotest_common.sh@936 -- # '[' -z 79070 ']' 00:27:07.581 16:36:27 -- common/autotest_common.sh@940 -- # kill -0 79070 00:27:07.581 16:36:27 -- common/autotest_common.sh@941 -- # uname 00:27:07.581 16:36:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:07.581 16:36:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79070 00:27:07.581 killing process with pid 79070 00:27:07.581 16:36:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:07.581 16:36:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:07.581 16:36:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79070' 00:27:07.581 16:36:27 -- common/autotest_common.sh@955 -- # kill 79070 00:27:07.581 16:36:27 -- common/autotest_common.sh@960 -- # wait 79070 00:27:08.958 16:36:28 -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:08.958 16:36:28 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:08.958 [2024-11-09 16:36:28.724412] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
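Note that the initiator side keeps no long-lived target: the throwaway spdk_tgt above (pid 79070) attaches the NVMe/TCP controller once, its bdev subsystem configuration is dumped as JSON, and the process is killed; every spdk_dd run below then replays that JSON by itself. In outline (a sketch of the tcp_initiator_setup flow traced above; the redirect into ini.json is an assumption inferred from the --json path and the [[ -f ... ]] check):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  # (the script waits for the RPC socket to come up before issuing this)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller \
      -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0   # exposes ftln1
  { echo '{"subsystems": ['
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
    echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json   # assumed destination
  kill "$spdk_ini_pid"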
00:27:08.958 [2024-11-09 16:36:28.724515] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79125 ] 00:27:09.216 [2024-11-09 16:36:28.871827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.475 [2024-11-09 16:36:29.029450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.857  [2024-11-09T16:36:31.571Z] Copying: 231/1024 [MB] (231 MBps) [2024-11-09T16:36:32.517Z] Copying: 468/1024 [MB] (237 MBps) [2024-11-09T16:36:33.461Z] Copying: 707/1024 [MB] (239 MBps) [2024-11-09T16:36:33.722Z] Copying: 954/1024 [MB] (247 MBps) [2024-11-09T16:36:34.666Z] Copying: 1024/1024 [MB] (average 237 MBps) 00:27:14.896 00:27:14.896 16:36:34 -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:14.896 16:36:34 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:14.896 Calculate MD5 checksum, iteration 1 00:27:14.896 16:36:34 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:14.896 16:36:34 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:14.896 16:36:34 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:14.896 16:36:34 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:14.896 16:36:34 -- ftl/common.sh@154 -- # return 0 00:27:14.896 16:36:34 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:14.896 [2024-11-09 16:36:34.406841] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
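The checksum pass reverses direction: spdk_dd reads the first GiB back out of ftln1 into a flat file, which is then hashed. This is the one-liner above, broken out for readability:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
      --bs=1048576 --count=1024 --qd=2 --skip=0
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file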
00:27:14.896 [2024-11-09 16:36:34.407518] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79181 ] 00:27:14.896 [2024-11-09 16:36:34.552531] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.158 [2024-11-09 16:36:34.717374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.544  [2024-11-09T16:36:36.887Z] Copying: 634/1024 [MB] (634 MBps) [2024-11-09T16:36:37.460Z] Copying: 1024/1024 [MB] (average 609 MBps) 00:27:17.690 00:27:17.690 16:36:37 -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:17.690 16:36:37 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:20.256 16:36:39 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:20.256 Fill FTL, iteration 2 00:27:20.256 16:36:39 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=20fdeedc55ea6f254f7df3f51791a277 00:27:20.256 16:36:39 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:20.256 16:36:39 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:20.256 16:36:39 -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:20.256 16:36:39 -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:20.256 16:36:39 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:20.256 16:36:39 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:20.256 16:36:39 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:20.256 16:36:39 -- ftl/common.sh@154 -- # return 0 00:27:20.256 16:36:39 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:20.256 [2024-11-09 16:36:39.580809] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
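Each digest is stashed for later comparison; the @47/@48 xtrace above corresponds to this bookkeeping in upgrade_shutdown.sh (a sketch):

  sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')   # 20fdeedc55ea6f254f7df3f51791a277 for iteration 1
  (( i++ ))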
00:27:20.256 [2024-11-09 16:36:39.581051] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79242 ] 00:27:20.256 [2024-11-09 16:36:39.729834] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.256 [2024-11-09 16:36:39.901725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:21.640  [2024-11-09T16:36:42.352Z] Copying: 188/1024 [MB] (188 MBps) [2024-11-09T16:36:43.287Z] Copying: 370/1024 [MB] (182 MBps) [2024-11-09T16:36:44.671Z] Copying: 608/1024 [MB] (238 MBps) [2024-11-09T16:36:45.244Z] Copying: 847/1024 [MB] (239 MBps) [2024-11-09T16:36:45.816Z] Copying: 1024/1024 [MB] (average 216 MBps) 00:27:26.046 00:27:26.046 Calculate MD5 checksum, iteration 2 00:27:26.046 16:36:45 -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:26.046 16:36:45 -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:26.046 16:36:45 -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:26.046 16:36:45 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:26.046 16:36:45 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:26.046 16:36:45 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:26.046 16:36:45 -- ftl/common.sh@154 -- # return 0 00:27:26.046 16:36:45 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:26.046 [2024-11-09 16:36:45.719049] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
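Once both 1 GiB ranges are written and hashed, the test arms the upgrade path before taking the target down; the knob is an FTL property, toggled and read back further below:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  # sanity check that there are dirty cache chunks to migrate (prints 3 in this run)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
      | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'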
00:27:26.046 [2024-11-09 16:36:45.719320] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79312 ] 00:27:26.307 [2024-11-09 16:36:45.864415] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.307 [2024-11-09 16:36:46.026402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:27.694  [2024-11-09T16:36:48.408Z] Copying: 646/1024 [MB] (646 MBps) [2024-11-09T16:36:49.348Z] Copying: 1024/1024 [MB] (average 634 MBps) 00:27:29.578 00:27:29.578 16:36:49 -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:29.578 16:36:49 -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:31.480 16:36:51 -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:31.480 16:36:51 -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=78610c21268714e0947ba66c4e6a8d43 00:27:31.480 16:36:51 -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:31.480 16:36:51 -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:31.480 16:36:51 -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:31.738 [2024-11-09 16:36:51.333477] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.738 [2024-11-09 16:36:51.333517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:31.738 [2024-11-09 16:36:51.333529] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:31.738 [2024-11-09 16:36:51.333538] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.738 [2024-11-09 16:36:51.333556] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.738 [2024-11-09 16:36:51.333563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:31.738 [2024-11-09 16:36:51.333569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:31.738 [2024-11-09 16:36:51.333574] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.738 [2024-11-09 16:36:51.333589] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.738 [2024-11-09 16:36:51.333596] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:31.738 [2024-11-09 16:36:51.333607] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:31.738 [2024-11-09 16:36:51.333612] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.738 [2024-11-09 16:36:51.333663] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.176 ms, result 0 00:27:31.738 true 00:27:31.738 16:36:51 -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:31.997 { 00:27:31.997 "name": "ftl", 00:27:31.997 "properties": [ 00:27:31.997 { 00:27:31.997 "name": "superblock_version", 00:27:31.997 "value": 5, 00:27:31.997 "read-only": true 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "name": "base_device", 00:27:31.997 "bands": [ 00:27:31.997 { 00:27:31.997 "id": 0, 00:27:31.997 "state": "FREE", 00:27:31.997 "validity": 0.0 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 1, 00:27:31.997 "state": "FREE", 00:27:31.997 "validity": 0.0 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 2, 00:27:31.997 "state": "FREE", 00:27:31.997 "validity": 0.0 
00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 3, 00:27:31.997 "state": "FREE", 00:27:31.997 "validity": 0.0 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 4, 00:27:31.997 "state": "FREE", 00:27:31.997 "validity": 0.0 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 5, 00:27:31.997 "state": "FREE", 00:27:31.997 "validity": 0.0 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 6, 00:27:31.997 "state": "FREE", 00:27:31.997 "validity": 0.0 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 7, 00:27:31.997 "state": "FREE", 00:27:31.997 "validity": 0.0 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 8, 00:27:31.997 "state": "FREE", 00:27:31.997 "validity": 0.0 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 9, 00:27:31.997 "state": "FREE", 00:27:31.997 "validity": 0.0 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 10, 00:27:31.997 "state": "FREE", 00:27:31.997 "validity": 0.0 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 11, 00:27:31.997 "state": "FREE", 00:27:31.997 "validity": 0.0 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 12, 00:27:31.997 "state": "FREE", 00:27:31.997 "validity": 0.0 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 13, 00:27:31.997 "state": "FREE", 00:27:31.997 "validity": 0.0 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 14, 00:27:31.997 "state": "FREE", 00:27:31.997 "validity": 0.0 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 15, 00:27:31.997 "state": "FREE", 00:27:31.997 "validity": 0.0 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 16, 00:27:31.997 "state": "FREE", 00:27:31.997 "validity": 0.0 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 17, 00:27:31.997 "state": "FREE", 00:27:31.997 "validity": 0.0 00:27:31.997 } 00:27:31.997 ], 00:27:31.997 "read-only": true 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "name": "cache_device", 00:27:31.997 "type": "bdev", 00:27:31.997 "chunks": [ 00:27:31.997 { 00:27:31.997 "id": 0, 00:27:31.997 "state": "CLOSED", 00:27:31.997 "utilization": 1.0 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 1, 00:27:31.997 "state": "CLOSED", 00:27:31.997 "utilization": 1.0 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 2, 00:27:31.997 "state": "OPEN", 00:27:31.997 "utilization": 0.001953125 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "id": 3, 00:27:31.997 "state": "OPEN", 00:27:31.997 "utilization": 0.0 00:27:31.997 } 00:27:31.997 ], 00:27:31.997 "read-only": true 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "name": "verbose_mode", 00:27:31.997 "value": true, 00:27:31.997 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:31.997 }, 00:27:31.997 { 00:27:31.997 "name": "prep_upgrade_on_shutdown", 00:27:31.997 "value": false, 00:27:31.997 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:31.997 } 00:27:31.997 ] 00:27:31.997 } 00:27:31.997 16:36:51 -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:31.997 [2024-11-09 16:36:51.717777] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.997 [2024-11-09 16:36:51.717808] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:31.997 [2024-11-09 16:36:51.717816] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:31.997 [2024-11-09 16:36:51.717822] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.997 [2024-11-09 16:36:51.717838] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: 
[FTL][ftl] Action 00:27:31.997 [2024-11-09 16:36:51.717844] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:31.997 [2024-11-09 16:36:51.717850] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:31.997 [2024-11-09 16:36:51.717855] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.997 [2024-11-09 16:36:51.717871] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:31.997 [2024-11-09 16:36:51.717876] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:31.997 [2024-11-09 16:36:51.717882] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:31.997 [2024-11-09 16:36:51.717887] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:31.997 [2024-11-09 16:36:51.717928] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.140 ms, result 0 00:27:31.997 true 00:27:31.997 16:36:51 -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:31.997 16:36:51 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:31.997 16:36:51 -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:32.256 16:36:51 -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:32.256 16:36:51 -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:32.256 16:36:51 -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:32.514 [2024-11-09 16:36:52.098133] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.514 [2024-11-09 16:36:52.098167] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:32.514 [2024-11-09 16:36:52.098177] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:32.515 [2024-11-09 16:36:52.098182] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.515 [2024-11-09 16:36:52.098199] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.515 [2024-11-09 16:36:52.098205] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:32.515 [2024-11-09 16:36:52.098210] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:32.515 [2024-11-09 16:36:52.098216] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.515 [2024-11-09 16:36:52.098238] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:32.515 [2024-11-09 16:36:52.098243] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:32.515 [2024-11-09 16:36:52.098249] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:32.515 [2024-11-09 16:36:52.098254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:32.515 [2024-11-09 16:36:52.098298] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.155 ms, result 0 00:27:32.515 true 00:27:32.515 16:36:52 -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:32.515 { 00:27:32.515 "name": "ftl", 00:27:32.515 "properties": [ 00:27:32.515 { 00:27:32.515 "name": "superblock_version", 00:27:32.515 "value": 5, 00:27:32.515 "read-only": true 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 
"name": "base_device", 00:27:32.515 "bands": [ 00:27:32.515 { 00:27:32.515 "id": 0, 00:27:32.515 "state": "FREE", 00:27:32.515 "validity": 0.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 1, 00:27:32.515 "state": "FREE", 00:27:32.515 "validity": 0.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 2, 00:27:32.515 "state": "FREE", 00:27:32.515 "validity": 0.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 3, 00:27:32.515 "state": "FREE", 00:27:32.515 "validity": 0.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 4, 00:27:32.515 "state": "FREE", 00:27:32.515 "validity": 0.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 5, 00:27:32.515 "state": "FREE", 00:27:32.515 "validity": 0.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 6, 00:27:32.515 "state": "FREE", 00:27:32.515 "validity": 0.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 7, 00:27:32.515 "state": "FREE", 00:27:32.515 "validity": 0.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 8, 00:27:32.515 "state": "FREE", 00:27:32.515 "validity": 0.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 9, 00:27:32.515 "state": "FREE", 00:27:32.515 "validity": 0.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 10, 00:27:32.515 "state": "FREE", 00:27:32.515 "validity": 0.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 11, 00:27:32.515 "state": "FREE", 00:27:32.515 "validity": 0.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 12, 00:27:32.515 "state": "FREE", 00:27:32.515 "validity": 0.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 13, 00:27:32.515 "state": "FREE", 00:27:32.515 "validity": 0.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 14, 00:27:32.515 "state": "FREE", 00:27:32.515 "validity": 0.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 15, 00:27:32.515 "state": "FREE", 00:27:32.515 "validity": 0.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 16, 00:27:32.515 "state": "FREE", 00:27:32.515 "validity": 0.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 17, 00:27:32.515 "state": "FREE", 00:27:32.515 "validity": 0.0 00:27:32.515 } 00:27:32.515 ], 00:27:32.515 "read-only": true 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "name": "cache_device", 00:27:32.515 "type": "bdev", 00:27:32.515 "chunks": [ 00:27:32.515 { 00:27:32.515 "id": 0, 00:27:32.515 "state": "CLOSED", 00:27:32.515 "utilization": 1.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 1, 00:27:32.515 "state": "CLOSED", 00:27:32.515 "utilization": 1.0 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 2, 00:27:32.515 "state": "OPEN", 00:27:32.515 "utilization": 0.001953125 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "id": 3, 00:27:32.515 "state": "OPEN", 00:27:32.515 "utilization": 0.0 00:27:32.515 } 00:27:32.515 ], 00:27:32.515 "read-only": true 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "name": "verbose_mode", 00:27:32.515 "value": true, 00:27:32.515 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:32.515 }, 00:27:32.515 { 00:27:32.515 "name": "prep_upgrade_on_shutdown", 00:27:32.515 "value": true, 00:27:32.515 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:32.515 } 00:27:32.515 ] 00:27:32.515 } 00:27:32.515 16:36:52 -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:32.515 16:36:52 -- ftl/common.sh@130 -- # [[ -n 78945 ]] 00:27:32.515 16:36:52 -- ftl/common.sh@131 -- # killprocess 78945 00:27:32.515 16:36:52 -- common/autotest_common.sh@936 -- # '[' -z 78945 ']' 00:27:32.515 16:36:52 -- 
common/autotest_common.sh@940 -- # kill -0 78945 00:27:32.515 16:36:52 -- common/autotest_common.sh@941 -- # uname 00:27:32.515 16:36:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:32.515 16:36:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78945 00:27:32.776 16:36:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:32.776 killing process with pid 78945 00:27:32.776 16:36:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:32.776 16:36:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78945' 00:27:32.776 16:36:52 -- common/autotest_common.sh@955 -- # kill 78945 00:27:32.776 16:36:52 -- common/autotest_common.sh@960 -- # wait 78945 00:27:33.350 [2024-11-09 16:36:52.829322] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:27:33.350 [2024-11-09 16:36:52.841505] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.350 [2024-11-09 16:36:52.841538] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:33.350 [2024-11-09 16:36:52.841548] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:33.350 [2024-11-09 16:36:52.841555] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:33.350 [2024-11-09 16:36:52.841573] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:33.350 [2024-11-09 16:36:52.843575] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:33.350 [2024-11-09 16:36:52.843598] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:33.350 [2024-11-09 16:36:52.843606] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.992 ms 00:27:33.350 [2024-11-09 16:36:52.843613] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.364 [2024-11-09 16:37:01.630511] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.364 [2024-11-09 16:37:01.630557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:43.364 [2024-11-09 16:37:01.630569] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 8786.849 ms 00:27:43.364 [2024-11-09 16:37:01.630579] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.364 [2024-11-09 16:37:01.631608] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.364 [2024-11-09 16:37:01.631627] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:43.364 [2024-11-09 16:37:01.631634] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.018 ms 00:27:43.364 [2024-11-09 16:37:01.631640] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.364 [2024-11-09 16:37:01.632541] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.364 [2024-11-09 16:37:01.632557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:27:43.364 [2024-11-09 16:37:01.632564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.881 ms 00:27:43.364 [2024-11-09 16:37:01.632569] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.364 [2024-11-09 16:37:01.640165] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.364 [2024-11-09 16:37:01.640194] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:43.364 [2024-11-09 16:37:01.640201] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.564 ms 00:27:43.364 [2024-11-09 16:37:01.640207] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.364 [2024-11-09 16:37:01.645211] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.364 [2024-11-09 16:37:01.645245] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:43.364 [2024-11-09 16:37:01.645255] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.971 ms 00:27:43.364 [2024-11-09 16:37:01.645261] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.364 [2024-11-09 16:37:01.645324] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.364 [2024-11-09 16:37:01.645332] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:43.364 [2024-11-09 16:37:01.645343] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:43.364 [2024-11-09 16:37:01.645349] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.364 [2024-11-09 16:37:01.652208] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.364 [2024-11-09 16:37:01.652240] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:27:43.364 [2024-11-09 16:37:01.652247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.847 ms 00:27:43.364 [2024-11-09 16:37:01.652253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.364 [2024-11-09 16:37:01.659154] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.364 [2024-11-09 16:37:01.659180] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:27:43.364 [2024-11-09 16:37:01.659186] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 6.877 ms 00:27:43.364 [2024-11-09 16:37:01.659192] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.364 [2024-11-09 16:37:01.666425] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.364 [2024-11-09 16:37:01.666450] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:43.364 [2024-11-09 16:37:01.666457] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.210 ms 00:27:43.364 [2024-11-09 16:37:01.666462] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.364 [2024-11-09 16:37:01.673636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.364 [2024-11-09 16:37:01.673662] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:43.364 [2024-11-09 16:37:01.673669] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.129 ms 00:27:43.364 [2024-11-09 16:37:01.673674] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.364 [2024-11-09 16:37:01.673697] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:43.364 [2024-11-09 16:37:01.673707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:43.364 [2024-11-09 16:37:01.673714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:43.364 [2024-11-09 16:37:01.673720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:43.364 [2024-11-09 16:37:01.673726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 
wr_cnt: 0 state: free 00:27:43.364 [2024-11-09 16:37:01.673732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:43.364 [2024-11-09 16:37:01.673738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:43.364 [2024-11-09 16:37:01.673743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:43.364 [2024-11-09 16:37:01.673749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:43.364 [2024-11-09 16:37:01.673754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:43.364 [2024-11-09 16:37:01.673760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:43.364 [2024-11-09 16:37:01.673766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:43.364 [2024-11-09 16:37:01.673771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:43.364 [2024-11-09 16:37:01.673778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:43.364 [2024-11-09 16:37:01.673783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:43.364 [2024-11-09 16:37:01.673789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:43.364 [2024-11-09 16:37:01.673800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:43.364 [2024-11-09 16:37:01.673806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:43.364 [2024-11-09 16:37:01.673812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:43.364 [2024-11-09 16:37:01.673819] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:43.364 [2024-11-09 16:37:01.673826] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 298dc075-7548-4667-aff5-e02a8d8d0c87 00:27:43.364 [2024-11-09 16:37:01.673831] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:43.364 [2024-11-09 16:37:01.673837] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:27:43.364 [2024-11-09 16:37:01.673842] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:43.364 [2024-11-09 16:37:01.673848] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:43.364 [2024-11-09 16:37:01.673853] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:43.364 [2024-11-09 16:37:01.673862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:43.364 [2024-11-09 16:37:01.673867] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:43.364 [2024-11-09 16:37:01.673871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:43.364 [2024-11-09 16:37:01.673876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:43.364 [2024-11-09 16:37:01.673881] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.364 [2024-11-09 16:37:01.673887] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:43.364 [2024-11-09 16:37:01.673893] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.185 ms 00:27:43.364 [2024-11-09 16:37:01.673898] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.364 [2024-11-09 16:37:01.683536] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.364 [2024-11-09 16:37:01.683563] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:43.364 [2024-11-09 16:37:01.683570] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.614 ms 00:27:43.364 [2024-11-09 16:37:01.683580] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.364 [2024-11-09 16:37:01.683725] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:43.364 [2024-11-09 16:37:01.683738] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:43.364 [2024-11-09 16:37:01.683744] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.131 ms 00:27:43.364 [2024-11-09 16:37:01.683749] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.364 [2024-11-09 16:37:01.718760] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.364 [2024-11-09 16:37:01.718789] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:43.365 [2024-11-09 16:37:01.718800] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.365 [2024-11-09 16:37:01.718808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.365 [2024-11-09 16:37:01.718830] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.365 [2024-11-09 16:37:01.718837] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:43.365 [2024-11-09 16:37:01.718842] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.365 [2024-11-09 16:37:01.718848] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.365 [2024-11-09 16:37:01.718892] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.365 [2024-11-09 16:37:01.718900] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:43.365 [2024-11-09 16:37:01.718906] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.365 [2024-11-09 16:37:01.718911] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.365 [2024-11-09 16:37:01.718925] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.365 [2024-11-09 16:37:01.718931] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:43.365 [2024-11-09 16:37:01.718937] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.365 [2024-11-09 16:37:01.718943] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.365 [2024-11-09 16:37:01.778222] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.365 [2024-11-09 16:37:01.778269] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:43.365 [2024-11-09 16:37:01.778279] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.365 [2024-11-09 16:37:01.778289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.365 [2024-11-09 16:37:01.800770] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.365 [2024-11-09 16:37:01.800799] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:43.365 
[2024-11-09 16:37:01.800806] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.365 [2024-11-09 16:37:01.800812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.365 [2024-11-09 16:37:01.800858] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.365 [2024-11-09 16:37:01.800865] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:43.365 [2024-11-09 16:37:01.800871] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.365 [2024-11-09 16:37:01.800877] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.365 [2024-11-09 16:37:01.800910] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.365 [2024-11-09 16:37:01.800917] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:43.365 [2024-11-09 16:37:01.800923] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.365 [2024-11-09 16:37:01.800928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.365 [2024-11-09 16:37:01.800997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.365 [2024-11-09 16:37:01.801004] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:43.365 [2024-11-09 16:37:01.801011] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.365 [2024-11-09 16:37:01.801016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.365 [2024-11-09 16:37:01.801038] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.365 [2024-11-09 16:37:01.801047] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:43.365 [2024-11-09 16:37:01.801052] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.365 [2024-11-09 16:37:01.801058] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.365 [2024-11-09 16:37:01.801086] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.365 [2024-11-09 16:37:01.801093] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:43.365 [2024-11-09 16:37:01.801098] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.365 [2024-11-09 16:37:01.801104] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.365 [2024-11-09 16:37:01.801141] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:43.365 [2024-11-09 16:37:01.801148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:43.365 [2024-11-09 16:37:01.801154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:43.365 [2024-11-09 16:37:01.801159] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:43.365 [2024-11-09 16:37:01.801264] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8959.691 ms, result 0 00:27:51.561 16:37:10 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:51.561 16:37:10 -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:51.561 16:37:10 -- ftl/common.sh@81 -- # local base_bdev= 00:27:51.561 16:37:10 -- ftl/common.sh@82 -- # local cache_bdev= 00:27:51.561 16:37:10 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:51.561 16:37:10 -- ftl/common.sh@89 -- # spdk_tgt_pid=79557 00:27:51.561 16:37:10 -- 
ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:51.561 16:37:10 -- ftl/common.sh@91 -- # waitforlisten 79557 00:27:51.561 16:37:10 -- common/autotest_common.sh@829 -- # '[' -z 79557 ']' 00:27:51.561 16:37:10 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:51.561 16:37:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.561 16:37:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:51.561 16:37:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.561 16:37:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:51.561 16:37:10 -- common/autotest_common.sh@10 -- # set +x 00:27:51.561 [2024-11-09 16:37:10.394374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 00:27:51.561 [2024-11-09 16:37:10.394496] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79557 ] 00:27:51.561 [2024-11-09 16:37:10.543191] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.561 [2024-11-09 16:37:10.693353] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:51.561 [2024-11-09 16:37:10.693506] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.561 [2024-11-09 16:37:11.299604] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:51.561 [2024-11-09 16:37:11.299689] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:51.824 [2024-11-09 16:37:11.446149] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.824 [2024-11-09 16:37:11.446210] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:51.824 [2024-11-09 16:37:11.446240] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:51.824 [2024-11-09 16:37:11.446250] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.824 [2024-11-09 16:37:11.446313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.824 [2024-11-09 16:37:11.446326] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:51.824 [2024-11-09 16:37:11.446336] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:27:51.824 [2024-11-09 16:37:11.446344] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.824 [2024-11-09 16:37:11.446368] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:51.824 [2024-11-09 16:37:11.447104] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:51.824 [2024-11-09 16:37:11.447137] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.824 [2024-11-09 16:37:11.447146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:51.824 [2024-11-09 16:37:11.447154] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.774 ms 00:27:51.824 [2024-11-09 16:37:11.447162] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:51.824 [2024-11-09 16:37:11.448834] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:51.824 [2024-11-09 16:37:11.462690] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.824 [2024-11-09 16:37:11.462739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:51.824 [2024-11-09 16:37:11.462751] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.859 ms 00:27:51.824 [2024-11-09 16:37:11.462759] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.824 [2024-11-09 16:37:11.462941] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.824 [2024-11-09 16:37:11.462965] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:51.824 [2024-11-09 16:37:11.462975] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:51.824 [2024-11-09 16:37:11.462984] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.824 [2024-11-09 16:37:11.470823] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.824 [2024-11-09 16:37:11.470867] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:51.824 [2024-11-09 16:37:11.470877] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.755 ms 00:27:51.824 [2024-11-09 16:37:11.470890] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.824 [2024-11-09 16:37:11.470934] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.824 [2024-11-09 16:37:11.470942] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:51.824 [2024-11-09 16:37:11.470951] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:51.824 [2024-11-09 16:37:11.470959] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.824 [2024-11-09 16:37:11.471002] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.824 [2024-11-09 16:37:11.471012] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:51.824 [2024-11-09 16:37:11.471020] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:51.824 [2024-11-09 16:37:11.471028] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.824 [2024-11-09 16:37:11.471059] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:51.824 [2024-11-09 16:37:11.475266] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.824 [2024-11-09 16:37:11.475305] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:51.824 [2024-11-09 16:37:11.475319] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.218 ms 00:27:51.824 [2024-11-09 16:37:11.475326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.824 [2024-11-09 16:37:11.475364] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.824 [2024-11-09 16:37:11.475372] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:51.824 [2024-11-09 16:37:11.475381] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:51.824 [2024-11-09 16:37:11.475388] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.824 [2024-11-09 16:37:11.475434] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 
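Every FTL management step in this trace is emitted by mngt/ftl_mngt.c as the same four-entry pattern: Action, then name, then duration, then status. That makes long runs like this easy to profile after the fact; in the shutdown just completed, "Stop core poller" alone accounts for 8786.849 ms of the 8959.691 ms total. A minimal sketch for ranking steps by duration, assuming one log entry per line as the console originally emitted them (the file name console.log is a placeholder):

    # Rank FTL management steps by their reported duration. Relies on the
    # trace format above: a "name: <step>" entry followed by a
    # "duration: <n> ms" entry for the same step.
    awk '/trace_step.*name: /     { sub(/.*name: /, "");     step = $0 }
         /trace_step.*duration: / { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                                    printf "%12.3f ms  %s\n", $0, step }' console.log |
        sort -rn | head

On this run that would surface Stop core poller at the top of the shutdown and Restore P2L checkpoints (65.643 ms) among the larger startup steps.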
00:27:51.824 [2024-11-09 16:37:11.475456] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:27:51.824 [2024-11-09 16:37:11.475492] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:51.824 [2024-11-09 16:37:11.475512] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:27:51.824 [2024-11-09 16:37:11.475589] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:27:51.824 [2024-11-09 16:37:11.475600] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:51.824 [2024-11-09 16:37:11.475611] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x140 bytes 00:27:51.824 [2024-11-09 16:37:11.475622] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:51.824 [2024-11-09 16:37:11.475631] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:51.824 [2024-11-09 16:37:11.475640] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:51.824 [2024-11-09 16:37:11.475651] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:51.824 [2024-11-09 16:37:11.475658] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:27:51.824 [2024-11-09 16:37:11.475669] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:27:51.824 [2024-11-09 16:37:11.475677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.824 [2024-11-09 16:37:11.475684] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:51.824 [2024-11-09 16:37:11.475692] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.245 ms 00:27:51.824 [2024-11-09 16:37:11.475700] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.824 [2024-11-09 16:37:11.475765] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.824 [2024-11-09 16:37:11.475774] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:51.824 [2024-11-09 16:37:11.475781] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:27:51.824 [2024-11-09 16:37:11.475788] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.824 [2024-11-09 16:37:11.475867] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:51.824 [2024-11-09 16:37:11.475877] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:51.824 [2024-11-09 16:37:11.475886] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:51.824 [2024-11-09 16:37:11.475894] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.824 [2024-11-09 16:37:11.475902] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:51.824 [2024-11-09 16:37:11.475909] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:51.824 [2024-11-09 16:37:11.475916] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:51.824 [2024-11-09 16:37:11.475923] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:51.824 [2024-11-09 16:37:11.475929] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 
MiB 00:27:51.824 [2024-11-09 16:37:11.475936] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.824 [2024-11-09 16:37:11.475943] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:51.824 [2024-11-09 16:37:11.475950] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:51.824 [2024-11-09 16:37:11.475956] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.824 [2024-11-09 16:37:11.475964] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:51.824 [2024-11-09 16:37:11.475971] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:27:51.824 [2024-11-09 16:37:11.475978] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.824 [2024-11-09 16:37:11.475984] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:51.824 [2024-11-09 16:37:11.475992] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:27:51.824 [2024-11-09 16:37:11.475999] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.824 [2024-11-09 16:37:11.476005] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:27:51.824 [2024-11-09 16:37:11.476013] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:27:51.825 [2024-11-09 16:37:11.476020] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:27:51.825 [2024-11-09 16:37:11.476027] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:51.825 [2024-11-09 16:37:11.476041] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:51.825 [2024-11-09 16:37:11.476048] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:51.825 [2024-11-09 16:37:11.476055] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:51.825 [2024-11-09 16:37:11.476062] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:27:51.825 [2024-11-09 16:37:11.476070] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:51.825 [2024-11-09 16:37:11.476076] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:51.825 [2024-11-09 16:37:11.476083] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:51.825 [2024-11-09 16:37:11.476090] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:51.825 [2024-11-09 16:37:11.476097] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:51.825 [2024-11-09 16:37:11.476103] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:27:51.825 [2024-11-09 16:37:11.476110] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:27:51.825 [2024-11-09 16:37:11.476116] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:51.825 [2024-11-09 16:37:11.476122] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:51.825 [2024-11-09 16:37:11.476129] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.825 [2024-11-09 16:37:11.476135] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:51.825 [2024-11-09 16:37:11.476141] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:27:51.825 [2024-11-09 16:37:11.476147] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.825 [2024-11-09 16:37:11.476153] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base 
device layout: 00:27:51.825 [2024-11-09 16:37:11.476161] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:51.825 [2024-11-09 16:37:11.476167] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:51.825 [2024-11-09 16:37:11.476175] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:51.825 [2024-11-09 16:37:11.476182] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:51.825 [2024-11-09 16:37:11.476188] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:51.825 [2024-11-09 16:37:11.476195] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:51.825 [2024-11-09 16:37:11.476202] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:51.825 [2024-11-09 16:37:11.476208] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:51.825 [2024-11-09 16:37:11.476215] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:51.825 [2024-11-09 16:37:11.476242] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:51.825 [2024-11-09 16:37:11.476253] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:51.825 [2024-11-09 16:37:11.476266] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:51.825 [2024-11-09 16:37:11.476274] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:27:51.825 [2024-11-09 16:37:11.476281] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:27:51.825 [2024-11-09 16:37:11.476295] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:27:51.825 [2024-11-09 16:37:11.476303] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:27:51.825 [2024-11-09 16:37:11.476318] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:27:51.825 [2024-11-09 16:37:11.476326] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:27:51.825 [2024-11-09 16:37:11.476334] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:27:51.825 [2024-11-09 16:37:11.476341] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:27:51.825 [2024-11-09 16:37:11.476348] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:27:51.825 [2024-11-09 16:37:11.476356] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:27:51.825 [2024-11-09 16:37:11.476364] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:27:51.825 [2024-11-09 16:37:11.476372] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 
blk_offs:0x101f60 blk_sz:0x3e0a0 00:27:51.825 [2024-11-09 16:37:11.476379] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:51.825 [2024-11-09 16:37:11.476388] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:51.825 [2024-11-09 16:37:11.476396] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:51.825 [2024-11-09 16:37:11.476404] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:51.825 [2024-11-09 16:37:11.476411] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:51.825 [2024-11-09 16:37:11.476419] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:51.825 [2024-11-09 16:37:11.476426] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.825 [2024-11-09 16:37:11.476434] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:51.825 [2024-11-09 16:37:11.476441] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.604 ms 00:27:51.825 [2024-11-09 16:37:11.476449] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.825 [2024-11-09 16:37:11.494279] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.825 [2024-11-09 16:37:11.494324] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:51.825 [2024-11-09 16:37:11.494337] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.782 ms 00:27:51.825 [2024-11-09 16:37:11.494346] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.825 [2024-11-09 16:37:11.494391] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.825 [2024-11-09 16:37:11.494401] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:51.825 [2024-11-09 16:37:11.494410] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:51.825 [2024-11-09 16:37:11.494419] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.825 [2024-11-09 16:37:11.529102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.825 [2024-11-09 16:37:11.529146] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:51.825 [2024-11-09 16:37:11.529157] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 34.622 ms 00:27:51.825 [2024-11-09 16:37:11.529166] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.825 [2024-11-09 16:37:11.529204] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.825 [2024-11-09 16:37:11.529212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:51.825 [2024-11-09 16:37:11.529220] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:51.825 [2024-11-09 16:37:11.529241] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.825 [2024-11-09 16:37:11.529820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.825 [2024-11-09 16:37:11.529871] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:51.825 [2024-11-09 
16:37:11.529881] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.529 ms 00:27:51.825 [2024-11-09 16:37:11.529889] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.825 [2024-11-09 16:37:11.529935] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.825 [2024-11-09 16:37:11.529944] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:51.825 [2024-11-09 16:37:11.529952] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:51.825 [2024-11-09 16:37:11.529960] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.825 [2024-11-09 16:37:11.547849] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.825 [2024-11-09 16:37:11.547889] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:51.825 [2024-11-09 16:37:11.547900] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 17.863 ms 00:27:51.826 [2024-11-09 16:37:11.547908] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.826 [2024-11-09 16:37:11.562217] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:51.826 [2024-11-09 16:37:11.562274] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:51.826 [2024-11-09 16:37:11.562286] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.826 [2024-11-09 16:37:11.562294] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:51.826 [2024-11-09 16:37:11.562304] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.267 ms 00:27:51.826 [2024-11-09 16:37:11.562320] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.826 [2024-11-09 16:37:11.577345] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.826 [2024-11-09 16:37:11.577388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:51.826 [2024-11-09 16:37:11.577399] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.973 ms 00:27:51.826 [2024-11-09 16:37:11.577407] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.826 [2024-11-09 16:37:11.590065] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.826 [2024-11-09 16:37:11.590107] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:51.826 [2024-11-09 16:37:11.590117] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.605 ms 00:27:51.826 [2024-11-09 16:37:11.590125] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.084 [2024-11-09 16:37:11.602749] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.084 [2024-11-09 16:37:11.602790] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:52.084 [2024-11-09 16:37:11.602801] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.577 ms 00:27:52.084 [2024-11-09 16:37:11.602808] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.084 [2024-11-09 16:37:11.603205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.084 [2024-11-09 16:37:11.603241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:52.084 [2024-11-09 16:37:11.603251] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.291 ms 
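The layout dumps above report every region twice: ftl_layout.c prints offsets and sizes in MiB, while the SB metadata dump prints blk_offs/blk_sz as hex FTL block counts. The two agree under a 4 KiB FTL block size: the l2p region's blk_sz:0xe80 is 3712 blocks, exactly the 14.50 MiB shown for Region l2p. A small conversion helper, a sketch assuming that 4 KiB block size:

    # Convert a hex block count from the SB metadata layout dump to MiB,
    # assuming 4 KiB FTL blocks.
    blk_to_mib() { echo "scale=2; $(( $1 )) * 4 / 1024" | bc; }
    blk_to_mib 0xe80   # l2p        -> 14.50
    blk_to_mib 0x20    # superblock -> .12 (the dump prints 0.12 MiB)

The same arithmetic checks the offsets, e.g. the l2p blk_offs:0x20 is the 0.12 MiB offset shown in the MiB dump.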
00:27:52.084 [2024-11-09 16:37:11.603260] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.084 [2024-11-09 16:37:11.668923] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.085 [2024-11-09 16:37:11.668980] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:52.085 [2024-11-09 16:37:11.668995] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 65.643 ms 00:27:52.085 [2024-11-09 16:37:11.669004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.085 [2024-11-09 16:37:11.679621] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:52.085 [2024-11-09 16:37:11.680475] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.085 [2024-11-09 16:37:11.680513] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:52.085 [2024-11-09 16:37:11.680522] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 11.413 ms 00:27:52.085 [2024-11-09 16:37:11.680534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.085 [2024-11-09 16:37:11.680597] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.085 [2024-11-09 16:37:11.680606] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:52.085 [2024-11-09 16:37:11.680613] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:52.085 [2024-11-09 16:37:11.680620] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.085 [2024-11-09 16:37:11.680665] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.085 [2024-11-09 16:37:11.680674] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:52.085 [2024-11-09 16:37:11.680681] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:52.085 [2024-11-09 16:37:11.680688] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.085 [2024-11-09 16:37:11.681888] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.085 [2024-11-09 16:37:11.681933] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:27:52.085 [2024-11-09 16:37:11.681942] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.178 ms 00:27:52.085 [2024-11-09 16:37:11.681949] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.085 [2024-11-09 16:37:11.681983] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.085 [2024-11-09 16:37:11.681991] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:52.085 [2024-11-09 16:37:11.681998] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:52.085 [2024-11-09 16:37:11.682004] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.085 [2024-11-09 16:37:11.682037] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:52.085 [2024-11-09 16:37:11.682046] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.085 [2024-11-09 16:37:11.682055] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:52.085 [2024-11-09 16:37:11.682062] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:52.085 [2024-11-09 16:37:11.682068] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.085 [2024-11-09 16:37:11.701291] 
mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.085 [2024-11-09 16:37:11.701331] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:52.085 [2024-11-09 16:37:11.701341] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 19.206 ms 00:27:52.085 [2024-11-09 16:37:11.701348] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.085 [2024-11-09 16:37:11.701421] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.085 [2024-11-09 16:37:11.701428] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:52.085 [2024-11-09 16:37:11.701435] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:52.085 [2024-11-09 16:37:11.701441] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.085 [2024-11-09 16:37:11.702477] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 255.962 ms, result 0 00:27:52.085 [2024-11-09 16:37:11.717591] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.085 [2024-11-09 16:37:11.733588] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:27:52.085 [2024-11-09 16:37:11.741695] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:52.344 16:37:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:52.344 16:37:11 -- common/autotest_common.sh@862 -- # return 0 00:27:52.344 16:37:11 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:52.344 16:37:11 -- ftl/common.sh@95 -- # return 0 00:27:52.344 16:37:11 -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:52.344 [2024-11-09 16:37:12.082485] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.344 [2024-11-09 16:37:12.082518] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:52.344 [2024-11-09 16:37:12.082528] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:52.344 [2024-11-09 16:37:12.082534] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.344 [2024-11-09 16:37:12.082551] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.344 [2024-11-09 16:37:12.082558] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:52.344 [2024-11-09 16:37:12.082564] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:52.344 [2024-11-09 16:37:12.082572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.344 [2024-11-09 16:37:12.082587] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:52.344 [2024-11-09 16:37:12.082594] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:52.344 [2024-11-09 16:37:12.082599] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:52.344 [2024-11-09 16:37:12.082605] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:52.344 [2024-11-09 16:37:12.082648] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.157 ms, result 0 00:27:52.344 true 00:27:52.344 16:37:12 -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b 
ftl 00:27:52.602 { 00:27:52.602 "name": "ftl", 00:27:52.602 "properties": [ 00:27:52.602 { 00:27:52.602 "name": "superblock_version", 00:27:52.602 "value": 5, 00:27:52.602 "read-only": true 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "name": "base_device", 00:27:52.602 "bands": [ 00:27:52.602 { 00:27:52.602 "id": 0, 00:27:52.602 "state": "CLOSED", 00:27:52.602 "validity": 1.0 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "id": 1, 00:27:52.602 "state": "CLOSED", 00:27:52.602 "validity": 1.0 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "id": 2, 00:27:52.602 "state": "CLOSED", 00:27:52.602 "validity": 0.007843137254901933 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "id": 3, 00:27:52.602 "state": "FREE", 00:27:52.602 "validity": 0.0 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "id": 4, 00:27:52.602 "state": "FREE", 00:27:52.602 "validity": 0.0 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "id": 5, 00:27:52.602 "state": "FREE", 00:27:52.602 "validity": 0.0 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "id": 6, 00:27:52.602 "state": "FREE", 00:27:52.602 "validity": 0.0 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "id": 7, 00:27:52.602 "state": "FREE", 00:27:52.602 "validity": 0.0 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "id": 8, 00:27:52.602 "state": "FREE", 00:27:52.602 "validity": 0.0 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "id": 9, 00:27:52.602 "state": "FREE", 00:27:52.602 "validity": 0.0 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "id": 10, 00:27:52.602 "state": "FREE", 00:27:52.602 "validity": 0.0 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "id": 11, 00:27:52.602 "state": "FREE", 00:27:52.602 "validity": 0.0 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "id": 12, 00:27:52.602 "state": "FREE", 00:27:52.602 "validity": 0.0 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "id": 13, 00:27:52.602 "state": "FREE", 00:27:52.602 "validity": 0.0 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "id": 14, 00:27:52.602 "state": "FREE", 00:27:52.602 "validity": 0.0 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "id": 15, 00:27:52.602 "state": "FREE", 00:27:52.602 "validity": 0.0 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "id": 16, 00:27:52.602 "state": "FREE", 00:27:52.602 "validity": 0.0 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "id": 17, 00:27:52.602 "state": "FREE", 00:27:52.602 "validity": 0.0 00:27:52.602 } 00:27:52.602 ], 00:27:52.602 "read-only": true 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "name": "cache_device", 00:27:52.602 "type": "bdev", 00:27:52.602 "chunks": [ 00:27:52.602 { 00:27:52.602 "id": 0, 00:27:52.602 "state": "OPEN", 00:27:52.602 "utilization": 0.0 00:27:52.602 }, 00:27:52.602 { 00:27:52.602 "id": 1, 00:27:52.602 "state": "OPEN", 00:27:52.603 "utilization": 0.0 00:27:52.603 }, 00:27:52.603 { 00:27:52.603 "id": 2, 00:27:52.603 "state": "FREE", 00:27:52.603 "utilization": 0.0 00:27:52.603 }, 00:27:52.603 { 00:27:52.603 "id": 3, 00:27:52.603 "state": "FREE", 00:27:52.603 "utilization": 0.0 00:27:52.603 } 00:27:52.603 ], 00:27:52.603 "read-only": true 00:27:52.603 }, 00:27:52.603 { 00:27:52.603 "name": "verbose_mode", 00:27:52.603 "value": true, 00:27:52.603 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:52.603 }, 00:27:52.603 { 00:27:52.603 "name": "prep_upgrade_on_shutdown", 00:27:52.603 "value": false, 00:27:52.603 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:52.603 } 00:27:52.603 ] 00:27:52.603 } 00:27:52.603 16:37:12 -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 
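This second properties dump, taken after the prep_upgrade_on_shutdown restart, lines up with the band validity table printed during the shutdown, keeping in mind that the debug dump numbers bands from 1 while the JSON uses 0-based ids: the two full bands (261120 / 261120 valid blocks) return as CLOSED with validity 1.0, and the band holding 2048 / 261120 is the odd-looking validity on id 2, which is just that ratio:

    echo "scale=9; 2048 / 261120" | bc   # -> .007843137
    # the JSON above shows the same ratio at full double precision

Note also that prep_upgrade_on_shutdown now reads false (it was true in the first dump, before the prepared shutdown), and the cache_device chunks that sat CLOSED at utilization 1.0 before the shutdown come back at utilization 0.0, consistent with the "full chunks = 0, empty chunks = 4" reported while the NV cache state loaded.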
00:27:52.603 16:37:12 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:52.603 16:37:12 -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:52.861 16:37:12 -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:52.861 16:37:12 -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:52.861 16:37:12 -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:52.861 16:37:12 -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:52.861 16:37:12 -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:53.120 Validate MD5 checksum, iteration 1 00:27:53.120 16:37:12 -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:53.120 16:37:12 -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:53.120 16:37:12 -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:53.120 16:37:12 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:53.120 16:37:12 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:53.120 16:37:12 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:53.120 16:37:12 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:53.120 16:37:12 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:53.120 16:37:12 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:53.120 16:37:12 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:53.120 16:37:12 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:53.120 16:37:12 -- ftl/common.sh@154 -- # return 0 00:27:53.120 16:37:12 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:53.120 [2024-11-09 16:37:12.733478] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
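test_validate_checksum re-reads the FTL bdev over NVMe/TCP in 1 GiB windows (bs=1048576, count=1024) and hashes each window. A sketch reconstructed from the upgrade_shutdown.sh@96-105 trace above; the expected[] array, $testfile, and $iterations names are placeholders for the script's own variables, and tcp_dd is the ftl/common.sh helper whose spdk_dd invocation is visible in the trace:

    test_validate_checksum() {
        local skip=0 sum i
        for (( i = 0; i < iterations; i++ )); do
            echo "Validate MD5 checksum, iteration $((i + 1))"
            # read window i of the exported FTL bdev into a scratch file
            tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
            (( skip += 1024 ))
            sum=$(md5sum "$testfile" | cut -f1 -d' ')
            # sums recorded before the shutdown must survive the restart
            [[ $sum != "${expected[i]}" ]] && return 1
        done
    }

Since the same windows were hashed before the target went down, any mismatch here would mean the shutdown/startup cycle lost or corrupted data behind the FTL device.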
00:27:53.120 [2024-11-09 16:37:12.733899] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79596 ] 00:27:53.120 [2024-11-09 16:37:12.879838] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.378 [2024-11-09 16:37:13.019874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.752  [2024-11-09T16:37:15.089Z] Copying: 741/1024 [MB] (741 MBps) [2024-11-09T16:37:16.025Z] Copying: 1024/1024 [MB] (average 711 MBps) 00:27:56.255 00:27:56.255 16:37:15 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:56.255 16:37:15 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:58.180 16:37:17 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:58.180 Validate MD5 checksum, iteration 2 00:27:58.180 16:37:17 -- ftl/upgrade_shutdown.sh@103 -- # sum=20fdeedc55ea6f254f7df3f51791a277 00:27:58.180 16:37:17 -- ftl/upgrade_shutdown.sh@105 -- # [[ 20fdeedc55ea6f254f7df3f51791a277 != \2\0\f\d\e\e\d\c\5\5\e\a\6\f\2\5\4\f\7\d\f\3\f\5\1\7\9\1\a\2\7\7 ]] 00:27:58.180 16:37:17 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:58.180 16:37:17 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:58.180 16:37:17 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:58.180 16:37:17 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:58.180 16:37:17 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:58.180 16:37:17 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:58.180 16:37:17 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:58.180 16:37:17 -- ftl/common.sh@154 -- # return 0 00:27:58.180 16:37:17 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:58.180 [2024-11-09 16:37:17.926712] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
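A note on the strange-looking right-hand side in the comparison traced above: inside [[ ]], an unquoted operand after != is a glob pattern, so when xtrace prints the expanded expected checksum it backslash-escapes every character to show the match is literal. The log line is ordinary shell tracing, not corruption; a minimal reproduction:

    set -x
    sum=20fdeedc55ea6f254f7df3f51791a277      # window 1 md5 from above
    expected=$sum                             # recorded before the restart
    [[ $sum != $expected ]]                   # traces as: [[ 20fd... != \2\0\f\d... ]]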
00:27:58.180 [2024-11-09 16:37:17.926795] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79657 ] 00:27:58.439 [2024-11-09 16:37:18.067654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.439 [2024-11-09 16:37:18.208246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.340  [2024-11-09T16:37:20.369Z] Copying: 637/1024 [MB] (637 MBps) [2024-11-09T16:37:24.566Z] Copying: 1024/1024 [MB] (average 640 MBps) 00:28:04.796 00:28:04.796 16:37:24 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:04.796 16:37:24 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:06.699 16:37:26 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:06.699 16:37:26 -- ftl/upgrade_shutdown.sh@103 -- # sum=78610c21268714e0947ba66c4e6a8d43 00:28:06.699 16:37:26 -- ftl/upgrade_shutdown.sh@105 -- # [[ 78610c21268714e0947ba66c4e6a8d43 != \7\8\6\1\0\c\2\1\2\6\8\7\1\4\e\0\9\4\7\b\a\6\6\c\4\e\6\a\8\d\4\3 ]] 00:28:06.699 16:37:26 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:06.699 16:37:26 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:06.699 16:37:26 -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:06.699 16:37:26 -- ftl/common.sh@137 -- # [[ -n 79557 ]] 00:28:06.699 16:37:26 -- ftl/common.sh@138 -- # kill -9 79557 00:28:06.699 16:37:26 -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:06.699 16:37:26 -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:06.699 16:37:26 -- ftl/common.sh@81 -- # local base_bdev= 00:28:06.699 16:37:26 -- ftl/common.sh@82 -- # local cache_bdev= 00:28:06.699 16:37:26 -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:06.699 16:37:26 -- ftl/common.sh@89 -- # spdk_tgt_pid=79745 00:28:06.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:06.699 16:37:26 -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:06.699 16:37:26 -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:06.699 16:37:26 -- ftl/common.sh@91 -- # waitforlisten 79745 00:28:06.699 16:37:26 -- common/autotest_common.sh@829 -- # '[' -z 79745 ']' 00:28:06.699 16:37:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:06.699 16:37:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:06.699 16:37:26 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:06.699 16:37:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:06.699 16:37:26 -- common/autotest_common.sh@10 -- # set +x 00:28:06.699 [2024-11-09 16:37:26.146528] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
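With both windows verified, the test switches from the clean shutdown exercised earlier to a dirty one: the old target (pid 79557) is killed outright, so the next startup must recover dirty FTL state instead of the clean state persisted before. Reconstructed from the ftl/common.sh@137-139 trace above, the helper amounts to:

    # No RPC shutdown and no 'Set FTL clean state' pass: SIGKILL the
    # target and forget its pid.
    tcp_target_shutdown_dirty() {
        [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
        unset spdk_tgt_pid
    }

The "line 828: 79557 Killed" message just below is the shell reaping that SIGKILLed target; the replacement target (pid 79745) then starts from the same tgt.json and begins the recovery traced next.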
00:28:06.699 [2024-11-09 16:37:26.146641] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79745 ] 00:28:06.699 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 828: 79557 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:06.699 [2024-11-09 16:37:26.294983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.962 [2024-11-09 16:37:26.483516] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:06.962 [2024-11-09 16:37:26.483727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:07.533 [2024-11-09 16:37:27.206694] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:07.533 [2024-11-09 16:37:27.206778] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:07.795 [2024-11-09 16:37:27.344981] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.795 [2024-11-09 16:37:27.345026] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:07.795 [2024-11-09 16:37:27.345039] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:28:07.795 [2024-11-09 16:37:27.345047] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.795 [2024-11-09 16:37:27.345099] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.795 [2024-11-09 16:37:27.345111] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:07.795 [2024-11-09 16:37:27.345120] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:28:07.795 [2024-11-09 16:37:27.345127] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.795 [2024-11-09 16:37:27.345155] mngt/ftl_mngt_bdev.c: 195:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:07.795 [2024-11-09 16:37:27.345890] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:07.795 [2024-11-09 16:37:27.346120] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.795 [2024-11-09 16:37:27.346133] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:07.795 [2024-11-09 16:37:27.346142] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.968 ms 00:28:07.795 [2024-11-09 16:37:27.346149] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.795 [2024-11-09 16:37:27.346532] mngt/ftl_mngt_md.c: 452:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:07.795 [2024-11-09 16:37:27.363377] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.795 [2024-11-09 16:37:27.363412] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:07.795 [2024-11-09 16:37:27.363424] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.845 ms 00:28:07.795 [2024-11-09 16:37:27.363432] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.795 [2024-11-09 16:37:27.372365] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.795 [2024-11-09 16:37:27.372395] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:07.795 [2024-11-09 16:37:27.372404] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:28:07.795 [2024-11-09 16:37:27.372411] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.795 [2024-11-09 16:37:27.372776] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.795 [2024-11-09 16:37:27.372795] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:07.795 [2024-11-09 16:37:27.372804] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.290 ms 00:28:07.795 [2024-11-09 16:37:27.372812] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.795 [2024-11-09 16:37:27.372854] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.795 [2024-11-09 16:37:27.372863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:07.795 [2024-11-09 16:37:27.372873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:07.795 [2024-11-09 16:37:27.372881] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.795 [2024-11-09 16:37:27.372904] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.795 [2024-11-09 16:37:27.372918] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:07.795 [2024-11-09 16:37:27.372926] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:07.795 [2024-11-09 16:37:27.372933] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.795 [2024-11-09 16:37:27.372959] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:07.795 [2024-11-09 16:37:27.376105] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.795 [2024-11-09 16:37:27.376130] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:07.795 [2024-11-09 16:37:27.376139] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 3.156 ms 00:28:07.795 [2024-11-09 16:37:27.376146] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.795 [2024-11-09 16:37:27.376176] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.795 [2024-11-09 16:37:27.376187] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:07.795 [2024-11-09 16:37:27.376195] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:07.795 [2024-11-09 16:37:27.376202] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.795 [2024-11-09 16:37:27.376237] ftl_layout.c: 605:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:07.795 [2024-11-09 16:37:27.376256] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x138 bytes 00:28:07.795 [2024-11-09 16:37:27.376288] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:07.795 [2024-11-09 16:37:27.376304] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x140 bytes 00:28:07.795 [2024-11-09 16:37:27.376377] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x138 bytes 00:28:07.795 [2024-11-09 16:37:27.376394] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:07.795 [2024-11-09 16:37:27.376403] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: 
[FTL][ftl] layout blob store 0x140 bytes 00:28:07.795 [2024-11-09 16:37:27.376413] ftl_layout.c: 676:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:07.796 [2024-11-09 16:37:27.376421] ftl_layout.c: 678:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:07.796 [2024-11-09 16:37:27.376429] ftl_layout.c: 680:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:07.796 [2024-11-09 16:37:27.376436] ftl_layout.c: 681:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:07.796 [2024-11-09 16:37:27.376443] ftl_layout.c: 682:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 1024 00:28:07.796 [2024-11-09 16:37:27.376449] ftl_layout.c: 683:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 4 00:28:07.796 [2024-11-09 16:37:27.376457] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.796 [2024-11-09 16:37:27.376464] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:07.796 [2024-11-09 16:37:27.376475] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.237 ms 00:28:07.796 [2024-11-09 16:37:27.376482] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.796 [2024-11-09 16:37:27.376544] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.796 [2024-11-09 16:37:27.376557] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:07.796 [2024-11-09 16:37:27.376565] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:28:07.796 [2024-11-09 16:37:27.376572] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.796 [2024-11-09 16:37:27.376660] ftl_layout.c: 759:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:07.796 [2024-11-09 16:37:27.376671] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:07.796 [2024-11-09 16:37:27.376680] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:07.796 [2024-11-09 16:37:27.376690] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.796 [2024-11-09 16:37:27.376697] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:07.796 [2024-11-09 16:37:27.376705] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:07.796 [2024-11-09 16:37:27.376713] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:07.796 [2024-11-09 16:37:27.376720] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:07.796 [2024-11-09 16:37:27.376727] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:07.796 [2024-11-09 16:37:27.376733] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.796 [2024-11-09 16:37:27.376741] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:07.796 [2024-11-09 16:37:27.376748] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:07.796 [2024-11-09 16:37:27.376759] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.796 [2024-11-09 16:37:27.376765] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:07.796 [2024-11-09 16:37:27.376772] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.12 MiB 00:28:07.796 [2024-11-09 16:37:27.376778] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.796 [2024-11-09 16:37:27.376785] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region 
nvc_md_mirror 00:28:07.796 [2024-11-09 16:37:27.376792] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.25 MiB 00:28:07.796 [2024-11-09 16:37:27.376799] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.796 [2024-11-09 16:37:27.376805] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_nvc 00:28:07.796 [2024-11-09 16:37:27.376812] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.38 MiB 00:28:07.796 [2024-11-09 16:37:27.376819] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4096.00 MiB 00:28:07.796 [2024-11-09 16:37:27.376826] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:07.796 [2024-11-09 16:37:27.376832] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:07.796 [2024-11-09 16:37:27.376850] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:07.796 [2024-11-09 16:37:27.376856] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:07.796 [2024-11-09 16:37:27.376863] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18.88 MiB 00:28:07.796 [2024-11-09 16:37:27.376870] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:07.796 [2024-11-09 16:37:27.376877] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:07.796 [2024-11-09 16:37:27.376883] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:07.796 [2024-11-09 16:37:27.376889] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:07.796 [2024-11-09 16:37:27.376896] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:07.796 [2024-11-09 16:37:27.376905] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 26.88 MiB 00:28:07.796 [2024-11-09 16:37:27.376911] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 4.00 MiB 00:28:07.796 [2024-11-09 16:37:27.376918] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:07.796 [2024-11-09 16:37:27.376926] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:07.796 [2024-11-09 16:37:27.376933] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.796 [2024-11-09 16:37:27.376940] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:07.796 [2024-11-09 16:37:27.376946] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 31.00 MiB 00:28:07.796 [2024-11-09 16:37:27.376954] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.796 [2024-11-09 16:37:27.376960] ftl_layout.c: 766:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:07.796 [2024-11-09 16:37:27.376967] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:07.796 [2024-11-09 16:37:27.376975] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:07.796 [2024-11-09 16:37:27.376982] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:07.796 [2024-11-09 16:37:27.376990] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:07.796 [2024-11-09 16:37:27.376998] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:07.796 [2024-11-09 16:37:27.377005] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:07.796 [2024-11-09 16:37:27.377011] ftl_layout.c: 115:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:07.796 [2024-11-09 16:37:27.377018] ftl_layout.c: 116:dump_region: *NOTICE*: [FTL][ftl] offset: 
0.25 MiB 00:28:07.796 [2024-11-09 16:37:27.377025] ftl_layout.c: 118:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:07.796 [2024-11-09 16:37:27.377032] upgrade/ftl_sb_v5.c: 407:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:07.796 [2024-11-09 16:37:27.377041] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:07.796 [2024-11-09 16:37:27.377050] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:07.796 [2024-11-09 16:37:27.377058] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:1 blk_offs:0xea0 blk_sz:0x20 00:28:07.796 [2024-11-09 16:37:27.377065] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:1 blk_offs:0xec0 blk_sz:0x20 00:28:07.796 [2024-11-09 16:37:27.377077] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:1 blk_offs:0xee0 blk_sz:0x400 00:28:07.796 [2024-11-09 16:37:27.377084] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:1 blk_offs:0x12e0 blk_sz:0x400 00:28:07.796 [2024-11-09 16:37:27.377091] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:1 blk_offs:0x16e0 blk_sz:0x400 00:28:07.796 [2024-11-09 16:37:27.377099] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:1 blk_offs:0x1ae0 blk_sz:0x400 00:28:07.796 [2024-11-09 16:37:27.377106] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x1ee0 blk_sz:0x20 00:28:07.796 [2024-11-09 16:37:27.377113] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x1f00 blk_sz:0x20 00:28:07.796 [2024-11-09 16:37:27.377121] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:1 blk_offs:0x1f20 blk_sz:0x20 00:28:07.796 [2024-11-09 16:37:27.377128] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:1 blk_offs:0x1f40 blk_sz:0x20 00:28:07.796 [2024-11-09 16:37:27.377136] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x8 ver:0 blk_offs:0x1f60 blk_sz:0x100000 00:28:07.796 [2024-11-09 16:37:27.377145] upgrade/ftl_sb_v5.c: 415:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x101f60 blk_sz:0x3e0a0 00:28:07.796 [2024-11-09 16:37:27.377152] upgrade/ftl_sb_v5.c: 421:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:07.796 [2024-11-09 16:37:27.377161] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:07.796 [2024-11-09 16:37:27.377169] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:07.796 [2024-11-09 16:37:27.377176] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:07.796 [2024-11-09 16:37:27.377184] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:07.796 
[2024-11-09 16:37:27.377191] upgrade/ftl_sb_v5.c: 429:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:07.796 [2024-11-09 16:37:27.377198] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.796 [2024-11-09 16:37:27.377206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:07.796 [2024-11-09 16:37:27.377218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.581 ms 00:28:07.796 [2024-11-09 16:37:27.377237] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.796 [2024-11-09 16:37:27.390916] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.796 [2024-11-09 16:37:27.390943] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:07.796 [2024-11-09 16:37:27.390955] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 13.637 ms 00:28:07.796 [2024-11-09 16:37:27.390963] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.796 [2024-11-09 16:37:27.390997] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.796 [2024-11-09 16:37:27.391005] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:07.796 [2024-11-09 16:37:27.391013] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:07.796 [2024-11-09 16:37:27.391020] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.796 [2024-11-09 16:37:27.421748] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.796 [2024-11-09 16:37:27.421776] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:07.796 [2024-11-09 16:37:27.421785] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 30.684 ms 00:28:07.796 [2024-11-09 16:37:27.421792] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.796 [2024-11-09 16:37:27.421820] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.796 [2024-11-09 16:37:27.421828] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:07.797 [2024-11-09 16:37:27.421836] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:07.797 [2024-11-09 16:37:27.421843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.797 [2024-11-09 16:37:27.421930] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.797 [2024-11-09 16:37:27.421940] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:07.797 [2024-11-09 16:37:27.421949] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:28:07.797 [2024-11-09 16:37:27.421956] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.797 [2024-11-09 16:37:27.421991] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.797 [2024-11-09 16:37:27.422001] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:07.797 [2024-11-09 16:37:27.422009] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:07.797 [2024-11-09 16:37:27.422016] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.797 [2024-11-09 16:37:27.436885] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.797 [2024-11-09 16:37:27.436912] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:07.797 [2024-11-09 
16:37:27.436921] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 14.849 ms 00:28:07.797 [2024-11-09 16:37:27.436928] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.797 [2024-11-09 16:37:27.437018] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.797 [2024-11-09 16:37:27.437028] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:07.797 [2024-11-09 16:37:27.437036] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:07.797 [2024-11-09 16:37:27.437044] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.797 [2024-11-09 16:37:27.453940] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.797 [2024-11-09 16:37:27.453973] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:07.797 [2024-11-09 16:37:27.453985] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 16.876 ms 00:28:07.797 [2024-11-09 16:37:27.453997] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.797 [2024-11-09 16:37:27.462918] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.797 [2024-11-09 16:37:27.462957] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:07.797 [2024-11-09 16:37:27.462967] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.269 ms 00:28:07.797 [2024-11-09 16:37:27.462975] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.797 [2024-11-09 16:37:27.522346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.797 [2024-11-09 16:37:27.522388] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:07.797 [2024-11-09 16:37:27.522401] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 59.322 ms 00:28:07.797 [2024-11-09 16:37:27.522409] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.797 [2024-11-09 16:37:27.522507] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:07.797 [2024-11-09 16:37:27.522549] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:07.797 [2024-11-09 16:37:27.522588] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:07.797 [2024-11-09 16:37:27.522627] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:07.797 [2024-11-09 16:37:27.522636] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.797 [2024-11-09 16:37:27.522646] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:07.797 [2024-11-09 16:37:27.522657] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.170 ms 00:28:07.797 [2024-11-09 16:37:27.522664] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.797 [2024-11-09 16:37:27.522720] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:07.797 [2024-11-09 16:37:27.522732] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.797 [2024-11-09 16:37:27.522739] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:07.797 [2024-11-09 16:37:27.522747] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:07.797 [2024-11-09 
16:37:27.522754] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.797 [2024-11-09 16:37:27.538600] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.797 [2024-11-09 16:37:27.538638] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:07.797 [2024-11-09 16:37:27.538650] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 15.824 ms 00:28:07.797 [2024-11-09 16:37:27.538657] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.797 [2024-11-09 16:37:27.547342] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.797 [2024-11-09 16:37:27.547375] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:07.797 [2024-11-09 16:37:27.547386] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:07.797 [2024-11-09 16:37:27.547396] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.797 [2024-11-09 16:37:27.547451] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:07.797 [2024-11-09 16:37:27.547461] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover unmap map 00:28:07.797 [2024-11-09 16:37:27.547469] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:07.797 [2024-11-09 16:37:27.547477] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:07.797 [2024-11-09 16:37:27.547648] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 8032, seq id 14 00:28:08.740 [2024-11-09 16:37:28.151674] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 8032, seq id 14 00:28:08.740 [2024-11-09 16:37:28.151819] ftl_nv_cache.c:2273:ftl_mngt_nv_cache_recover_open_chunk: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 270176, seq id 15 00:28:09.001 [2024-11-09 16:37:28.744371] ftl_nv_cache.c:2210:recover_open_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 270176, seq id 15 00:28:09.001 [2024-11-09 16:37:28.744478] ftl_nv_cache.c:1543:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:09.001 [2024-11-09 16:37:28.744493] ftl_nv_cache.c:1547:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:09.001 [2024-11-09 16:37:28.744506] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.001 [2024-11-09 16:37:28.744517] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:09.001 [2024-11-09 16:37:28.744533] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1197.004 ms 00:28:09.001 [2024-11-09 16:37:28.744543] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.001 [2024-11-09 16:37:28.744591] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.001 [2024-11-09 16:37:28.744602] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:09.001 [2024-11-09 16:37:28.744612] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:09.001 [2024-11-09 16:37:28.744621] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.001 [2024-11-09 16:37:28.756696] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:09.001 [2024-11-09 16:37:28.756819] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.001 [2024-11-09 16:37:28.756830] 
mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:09.002 [2024-11-09 16:37:28.756854] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 12.180 ms 00:28:09.002 [2024-11-09 16:37:28.756862] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.002 [2024-11-09 16:37:28.757579] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.002 [2024-11-09 16:37:28.757601] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from SHM 00:28:09.002 [2024-11-09 16:37:28.757611] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.639 ms 00:28:09.002 [2024-11-09 16:37:28.757618] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.002 [2024-11-09 16:37:28.759844] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.002 [2024-11-09 16:37:28.759863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:09.002 [2024-11-09 16:37:28.759873] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.208 ms 00:28:09.002 [2024-11-09 16:37:28.759882] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.263 [2024-11-09 16:37:28.786035] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.263 [2024-11-09 16:37:28.786077] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Complete unmap transaction 00:28:09.263 [2024-11-09 16:37:28.786090] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 26.128 ms 00:28:09.263 [2024-11-09 16:37:28.786098] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.263 [2024-11-09 16:37:28.786217] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.263 [2024-11-09 16:37:28.786246] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:09.263 [2024-11-09 16:37:28.786257] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:09.263 [2024-11-09 16:37:28.786265] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.263 [2024-11-09 16:37:28.787673] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.263 [2024-11-09 16:37:28.787711] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Free P2L region bufs 00:28:09.263 [2024-11-09 16:37:28.787721] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.389 ms 00:28:09.263 [2024-11-09 16:37:28.787730] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.263 [2024-11-09 16:37:28.787766] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.263 [2024-11-09 16:37:28.787775] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:09.263 [2024-11-09 16:37:28.787784] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:09.263 [2024-11-09 16:37:28.787791] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.263 [2024-11-09 16:37:28.787840] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:09.263 [2024-11-09 16:37:28.787852] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.263 [2024-11-09 16:37:28.787863] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:09.263 [2024-11-09 16:37:28.787872] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:28:09.263 [2024-11-09 16:37:28.787880] mngt/ftl_mngt.c: 410:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:09.263 [2024-11-09 16:37:28.787938] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:09.263 [2024-11-09 16:37:28.787947] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:09.263 [2024-11-09 16:37:28.787956] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:28:09.263 [2024-11-09 16:37:28.787965] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:09.263 [2024-11-09 16:37:28.789031] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1443.560 ms, result 0 00:28:09.263 [2024-11-09 16:37:28.802429] tcp.c: 661:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.263 [2024-11-09 16:37:28.818424] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_0 00:28:09.263 [2024-11-09 16:37:28.826605] tcp.c: 953:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:09.523 16:37:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:09.523 16:37:29 -- common/autotest_common.sh@862 -- # return 0 00:28:09.523 16:37:29 -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:09.523 16:37:29 -- ftl/common.sh@95 -- # return 0 00:28:09.523 16:37:29 -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:09.523 16:37:29 -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:09.523 16:37:29 -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:09.523 16:37:29 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:09.523 Validate MD5 checksum, iteration 1 00:28:09.523 16:37:29 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:09.523 16:37:29 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:09.523 16:37:29 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:09.523 16:37:29 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:09.523 16:37:29 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:09.523 16:37:29 -- ftl/common.sh@154 -- # return 0 00:28:09.523 16:37:29 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:09.523 [2024-11-09 16:37:29.141033] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
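
The recovery boot above completes in 1443.560 ms, with per-step timings reported through the 406-410:trace_step notices. Those notices are regular enough to mine; a throwaway helper (not part of the test suite, and assuming unwrapped trace lines, which this console capture folds in places) that ranks the slowest startup steps:

    awk '/407:trace_step:.*name:/     { sub(/.*name: /, "");     step = $0 }
         /409:trace_step:.*duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                                        printf "%10.3f ms  %s\n", $0, step }' build.log |
        sort -rn | head

Against this run it would surface "Recover open chunks P2L" (1197.004 ms) as the dominant cost of the dirty restart.
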
00:28:09.523 [2024-11-09 16:37:29.141149] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79790 ] 00:28:09.781 [2024-11-09 16:37:29.293245] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:09.781 [2024-11-09 16:37:29.462524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:11.692  [2024-11-09T16:37:31.462Z] Copying: 778/1024 [MB] (778 MBps) [2024-11-09T16:37:33.998Z] Copying: 1024/1024 [MB] (average 755 MBps) 00:28:14.228 00:28:14.228 16:37:33 -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:14.228 16:37:33 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:16.127 16:37:35 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:16.127 16:37:35 -- ftl/upgrade_shutdown.sh@103 -- # sum=20fdeedc55ea6f254f7df3f51791a277 00:28:16.127 16:37:35 -- ftl/upgrade_shutdown.sh@105 -- # [[ 20fdeedc55ea6f254f7df3f51791a277 != \2\0\f\d\e\e\d\c\5\5\e\a\6\f\2\5\4\f\7\d\f\3\f\5\1\7\9\1\a\2\7\7 ]] 00:28:16.127 16:37:35 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:16.127 16:37:35 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:16.127 Validate MD5 checksum, iteration 2 00:28:16.127 16:37:35 -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:16.127 16:37:35 -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:16.127 16:37:35 -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:16.127 16:37:35 -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:16.127 16:37:35 -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:16.127 16:37:35 -- ftl/common.sh@154 -- # return 0 00:28:16.127 16:37:35 -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:16.127 [2024-11-09 16:37:35.766390] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization... 
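
Each of these reads goes through the tcp_dd wrapper traced at ftl/common.sh@198-199: tcp_initiator_setup is a no-op once config/ini.json exists, and spdk_dd then drives the NVMe/TCP connection described by that file. A simplified sketch (the real helper also generates ini.json on first use, elided here):

    tcp_dd() {
        # tcp_initiator_setup: nothing to do when the cached config exists.
        [[ -f $rootdir/test/ftl/config/ini.json ]] || return 1  # generation elided
        # Run spdk_dd on core 1, against the initiator config, passing
        # through the --ib/--of/--bs/--count/--qd/--skip arguments.
        "$rootdir/build/bin/spdk_dd" '--cpumask=[1]' \
            --rpc-socket=/var/tmp/spdk.tgt.sock \
            --json="$rootdir/test/ftl/config/ini.json" "$@"
    }
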
00:28:16.127 [2024-11-09 16:37:35.766503] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79867 ] 00:28:16.386 [2024-11-09 16:37:35.912561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:16.386 [2024-11-09 16:37:36.050817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:17.767  [2024-11-09T16:37:38.481Z] Copying: 532/1024 [MB] (532 MBps) [2024-11-09T16:37:41.015Z] Copying: 1024/1024 [MB] (average 544 MBps) 00:28:21.245 00:28:21.245 16:37:40 -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:21.245 16:37:40 -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:22.619 16:37:42 -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:22.619 16:37:42 -- ftl/upgrade_shutdown.sh@103 -- # sum=78610c21268714e0947ba66c4e6a8d43 00:28:22.619 16:37:42 -- ftl/upgrade_shutdown.sh@105 -- # [[ 78610c21268714e0947ba66c4e6a8d43 != \7\8\6\1\0\c\2\1\2\6\8\7\1\4\e\0\9\4\7\b\a\6\6\c\4\e\6\a\8\d\4\3 ]] 00:28:22.619 16:37:42 -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:22.619 16:37:42 -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:22.619 16:37:42 -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:22.619 16:37:42 -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:22.620 16:37:42 -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:22.620 16:37:42 -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:22.620 16:37:42 -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:22.620 16:37:42 -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:22.620 16:37:42 -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:22.620 16:37:42 -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:22.620 16:37:42 -- ftl/common.sh@130 -- # [[ -n 79745 ]] 00:28:22.620 16:37:42 -- ftl/common.sh@131 -- # killprocess 79745 00:28:22.620 16:37:42 -- common/autotest_common.sh@936 -- # '[' -z 79745 ']' 00:28:22.620 16:37:42 -- common/autotest_common.sh@940 -- # kill -0 79745 00:28:22.620 16:37:42 -- common/autotest_common.sh@941 -- # uname 00:28:22.620 16:37:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:22.620 16:37:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79745 00:28:22.620 16:37:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:22.620 killing process with pid 79745 00:28:22.620 16:37:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:22.620 16:37:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79745' 00:28:22.620 16:37:42 -- common/autotest_common.sh@955 -- # kill 79745 00:28:22.620 16:37:42 -- common/autotest_common.sh@960 -- # wait 79745 00:28:23.187 [2024-11-09 16:37:42.901467] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_0 00:28:23.187 [2024-11-09 16:37:42.913554] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.187 [2024-11-09 16:37:42.913585] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:23.187 [2024-11-09 16:37:42.913595] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:23.187 [2024-11-09 16:37:42.913601] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.187 
[2024-11-09 16:37:42.913620] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:23.187 [2024-11-09 16:37:42.915677] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.187 [2024-11-09 16:37:42.915696] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:23.187 [2024-11-09 16:37:42.915704] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 2.046 ms 00:28:23.187 [2024-11-09 16:37:42.915711] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.187 [2024-11-09 16:37:42.915902] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.187 [2024-11-09 16:37:42.915910] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:23.187 [2024-11-09 16:37:42.915916] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.170 ms 00:28:23.187 [2024-11-09 16:37:42.915922] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.187 [2024-11-09 16:37:42.917209] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.187 [2024-11-09 16:37:42.917238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:23.187 [2024-11-09 16:37:42.917246] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 1.275 ms 00:28:23.187 [2024-11-09 16:37:42.917251] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.187 [2024-11-09 16:37:42.918102] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.187 [2024-11-09 16:37:42.918120] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P unmaps 00:28:23.187 [2024-11-09 16:37:42.918127] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.827 ms 00:28:23.187 [2024-11-09 16:37:42.918133] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.187 [2024-11-09 16:37:42.925806] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.187 [2024-11-09 16:37:42.925830] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:23.187 [2024-11-09 16:37:42.925837] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.637 ms 00:28:23.187 [2024-11-09 16:37:42.925843] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.187 [2024-11-09 16:37:42.930155] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.187 [2024-11-09 16:37:42.930179] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:23.187 [2024-11-09 16:37:42.930187] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 4.282 ms 00:28:23.187 [2024-11-09 16:37:42.930193] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.187 [2024-11-09 16:37:42.930263] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.187 [2024-11-09 16:37:42.930271] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:23.187 [2024-11-09 16:37:42.930278] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:28:23.187 [2024-11-09 16:37:42.930284] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.187 [2024-11-09 16:37:42.937727] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.187 [2024-11-09 16:37:42.937748] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist band info metadata 00:28:23.187 [2024-11-09 16:37:42.937755] mngt/ftl_mngt.c: 
409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.430 ms 00:28:23.187 [2024-11-09 16:37:42.937761] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.187 [2024-11-09 16:37:42.945294] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.187 [2024-11-09 16:37:42.945314] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: persist trim metadata 00:28:23.187 [2024-11-09 16:37:42.945321] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.508 ms 00:28:23.187 [2024-11-09 16:37:42.945326] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.187 [2024-11-09 16:37:42.953214] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.187 [2024-11-09 16:37:42.953241] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:23.187 [2024-11-09 16:37:42.953248] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.863 ms 00:28:23.187 [2024-11-09 16:37:42.953253] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.446 [2024-11-09 16:37:42.961069] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.446 [2024-11-09 16:37:42.961090] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:23.446 [2024-11-09 16:37:42.961097] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 7.769 ms 00:28:23.446 [2024-11-09 16:37:42.961102] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.446 [2024-11-09 16:37:42.961127] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:23.446 [2024-11-09 16:37:42.961138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:23.446 [2024-11-09 16:37:42.961149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:23.446 [2024-11-09 16:37:42.961156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:23.446 [2024-11-09 16:37:42.961162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:23.446 [2024-11-09 16:37:42.961168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:23.446 [2024-11-09 16:37:42.961174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:23.446 [2024-11-09 16:37:42.961180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:23.446 [2024-11-09 16:37:42.961185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:23.446 [2024-11-09 16:37:42.961191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:23.446 [2024-11-09 16:37:42.961197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:23.446 [2024-11-09 16:37:42.961203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:23.446 [2024-11-09 16:37:42.961208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:23.446 [2024-11-09 16:37:42.961214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:23.446 [2024-11-09 16:37:42.961220] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:23.446 [2024-11-09 16:37:42.961239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:23.446 [2024-11-09 16:37:42.961245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:23.446 [2024-11-09 16:37:42.961250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:23.446 [2024-11-09 16:37:42.961256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:23.446 [2024-11-09 16:37:42.961263] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:23.446 [2024-11-09 16:37:42.961269] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 298dc075-7548-4667-aff5-e02a8d8d0c87 00:28:23.446 [2024-11-09 16:37:42.961275] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:23.447 [2024-11-09 16:37:42.961280] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:23.447 [2024-11-09 16:37:42.961286] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:23.447 [2024-11-09 16:37:42.961293] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:23.447 [2024-11-09 16:37:42.961298] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:23.447 [2024-11-09 16:37:42.961305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:23.447 [2024-11-09 16:37:42.961311] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:23.447 [2024-11-09 16:37:42.961316] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:23.447 [2024-11-09 16:37:42.961321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:23.447 [2024-11-09 16:37:42.961328] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.447 [2024-11-09 16:37:42.961334] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:23.447 [2024-11-09 16:37:42.961340] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.202 ms 00:28:23.447 [2024-11-09 16:37:42.961347] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.447 [2024-11-09 16:37:42.971019] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.447 [2024-11-09 16:37:42.971041] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:23.447 [2024-11-09 16:37:42.971048] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 9.658 ms 00:28:23.447 [2024-11-09 16:37:42.971054] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.447 [2024-11-09 16:37:42.971195] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:23.447 [2024-11-09 16:37:42.971206] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:23.447 [2024-11-09 16:37:42.971211] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.126 ms 00:28:23.447 [2024-11-09 16:37:42.971217] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.447 [2024-11-09 16:37:43.006173] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.447 [2024-11-09 16:37:43.006195] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:23.447 [2024-11-09 16:37:43.006203] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] 
duration: 0.000 ms 00:28:23.447 [2024-11-09 16:37:43.006209] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.447 [2024-11-09 16:37:43.006244] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.447 [2024-11-09 16:37:43.006254] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:23.447 [2024-11-09 16:37:43.006261] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.447 [2024-11-09 16:37:43.006267] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.447 [2024-11-09 16:37:43.006313] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.447 [2024-11-09 16:37:43.006321] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:23.447 [2024-11-09 16:37:43.006327] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.447 [2024-11-09 16:37:43.006333] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.447 [2024-11-09 16:37:43.006346] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.447 [2024-11-09 16:37:43.006353] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:23.447 [2024-11-09 16:37:43.006359] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.447 [2024-11-09 16:37:43.006367] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.447 [2024-11-09 16:37:43.065205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.447 [2024-11-09 16:37:43.065238] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:23.447 [2024-11-09 16:37:43.065247] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.447 [2024-11-09 16:37:43.065254] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.447 [2024-11-09 16:37:43.088125] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.447 [2024-11-09 16:37:43.088148] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:23.447 [2024-11-09 16:37:43.088158] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.447 [2024-11-09 16:37:43.088164] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.447 [2024-11-09 16:37:43.088205] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.447 [2024-11-09 16:37:43.088212] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:23.447 [2024-11-09 16:37:43.088218] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.447 [2024-11-09 16:37:43.088238] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.447 [2024-11-09 16:37:43.088271] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.447 [2024-11-09 16:37:43.088278] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:23.447 [2024-11-09 16:37:43.088283] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.447 [2024-11-09 16:37:43.088289] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.447 [2024-11-09 16:37:43.088359] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.447 [2024-11-09 16:37:43.088367] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:23.447 [2024-11-09 16:37:43.088373] 
mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.447 [2024-11-09 16:37:43.088378] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.447 [2024-11-09 16:37:43.088402] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.447 [2024-11-09 16:37:43.088408] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:23.447 [2024-11-09 16:37:43.088414] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.447 [2024-11-09 16:37:43.088420] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.447 [2024-11-09 16:37:43.088450] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.447 [2024-11-09 16:37:43.088456] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:23.447 [2024-11-09 16:37:43.088462] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.447 [2024-11-09 16:37:43.088468] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.447 [2024-11-09 16:37:43.088500] mngt/ftl_mngt.c: 406:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:23.447 [2024-11-09 16:37:43.088507] mngt/ftl_mngt.c: 407:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:23.447 [2024-11-09 16:37:43.088513] mngt/ftl_mngt.c: 409:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:23.447 [2024-11-09 16:37:43.088518] mngt/ftl_mngt.c: 410:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:23.447 [2024-11-09 16:37:43.088612] mngt/ftl_mngt.c: 434:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 175.039 ms, result 0 00:28:24.016 16:37:43 -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:24.016 16:37:43 -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:24.016 16:37:43 -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:24.016 16:37:43 -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:24.016 16:37:43 -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:24.016 16:37:43 -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:24.016 16:37:43 -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:24.016 Remove shared memory files 00:28:24.016 16:37:43 -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:24.016 16:37:43 -- ftl/common.sh@205 -- # rm -f rm -f 00:28:24.016 16:37:43 -- ftl/common.sh@206 -- # rm -f rm -f 00:28:24.016 16:37:43 -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid79557 00:28:24.016 16:37:43 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:24.016 16:37:43 -- ftl/common.sh@209 -- # rm -f rm -f 00:28:24.016 ************************************ 00:28:24.016 END TEST ftl_upgrade_shutdown 00:28:24.016 ************************************ 00:28:24.016 00:28:24.016 real 1m27.299s 00:28:24.016 user 1m59.092s 00:28:24.016 sys 0m19.446s 00:28:24.016 16:37:43 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:24.016 16:37:43 -- common/autotest_common.sh@10 -- # set +x 00:28:24.276 16:37:43 -- ftl/ftl.sh@82 -- # '[' -eq 1 ']' 00:28:24.276 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 82: [: -eq: unary operator expected 00:28:24.276 16:37:43 -- ftl/ftl.sh@89 -- # '[' -eq 1 ']' 00:28:24.276 /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh: line 89: [: -eq: unary operator expected 00:28:24.276 16:37:43 -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:24.276 16:37:43 -- ftl/ftl.sh@14 -- # killprocess 70698 00:28:24.276 16:37:43 -- 
00:28:24.276 16:37:43 -- ftl/ftl.sh@1 -- # at_ftl_exit
00:28:24.276 16:37:43 -- ftl/ftl.sh@14 -- # killprocess 70698
00:28:24.276 16:37:43 -- common/autotest_common.sh@936 -- # '[' -z 70698 ']'
00:28:24.276 16:37:43 -- common/autotest_common.sh@940 -- # kill -0 70698
00:28:24.276 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (70698) - No such process
00:28:24.276 Process with pid 70698 is not found
00:28:24.276 16:37:43 -- common/autotest_common.sh@963 -- # echo 'Process with pid 70698 is not found'
00:28:24.276 16:37:43 -- ftl/ftl.sh@17 -- # [[ -n 0000:00:07.0 ]]
00:28:24.276 16:37:43 -- ftl/ftl.sh@19 -- # spdk_tgt_pid=79988
00:28:24.276 16:37:43 -- ftl/ftl.sh@20 -- # waitforlisten 79988
00:28:24.276 16:37:43 -- common/autotest_common.sh@829 -- # '[' -z 79988 ']'
00:28:24.276 16:37:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:24.276 16:37:43 -- common/autotest_common.sh@834 -- # local max_retries=100
00:28:24.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:24.276 16:37:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:24.276 16:37:43 -- common/autotest_common.sh@838 -- # xtrace_disable
00:28:24.276 16:37:43 -- common/autotest_common.sh@10 -- # set +x
00:28:24.276 16:37:43 -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:28:24.276 [2024-11-09 16:37:43.868580] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 23.11.0 initialization...
00:28:24.276 [2024-11-09 16:37:43.868795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79988 ]
00:28:24.276 [2024-11-09 16:37:44.022395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:24.534 [2024-11-09 16:37:44.166221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:28:24.534 [2024-11-09 16:37:44.166390] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:25.101 16:37:44 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:28:25.101 16:37:44 -- common/autotest_common.sh@862 -- # return 0
00:28:25.101 16:37:44 -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:07.0
00:28:25.362 nvme0n1
00:28:25.362 16:37:44 -- ftl/ftl.sh@22 -- # clear_lvols
00:28:25.362 16:37:44 -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:28:25.362 16:37:44 -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:28:25.362 16:37:45 -- ftl/common.sh@28 -- # stores=615191f5-59cd-4d75-865b-b017097f4e9a
00:28:25.362 16:37:45 -- ftl/common.sh@29 -- # for lvs in $stores
00:28:25.362 16:37:45 -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 615191f5-59cd-4d75-865b-b017097f4e9a
00:28:25.624 16:37:45 -- ftl/ftl.sh@23 -- # killprocess 79988
00:28:25.624 16:37:45 -- common/autotest_common.sh@936 -- # '[' -z 79988 ']'
00:28:25.624 16:37:45 -- common/autotest_common.sh@940 -- # kill -0 79988
00:28:25.624 16:37:45 -- common/autotest_common.sh@941 -- # uname
00:28:25.624 16:37:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:28:25.624 16:37:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79988
00:28:25.624 16:37:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:28:25.624 16:37:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:28:25.624 killing process with pid 79988
00:28:25.624 16:37:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79988'
00:28:25.624 16:37:45 -- common/autotest_common.sh@955 -- # kill 79988
00:28:25.624 16:37:45 -- common/autotest_common.sh@960 -- # wait 79988
00:28:27.005 16:37:46 -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:28:27.005 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:27.265 Waiting for block devices as requested
00:28:27.265 0000:00:09.0 (1b36 0010): uio_pci_generic -> nvme
00:28:27.265 0000:00:08.0 (1b36 0010): uio_pci_generic -> nvme
00:28:27.265 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme
00:28:27.526 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme
00:28:32.857 * Events for some block/disk devices (0000:00:09.0) were not caught, they may be missing
00:28:32.857 16:37:52 -- ftl/ftl.sh@28 -- # remove_shm
00:28:32.857 Remove shared memory files
00:28:32.857 16:37:52 -- ftl/common.sh@204 -- # echo Remove shared memory files
00:28:32.857 16:37:52 -- ftl/common.sh@205 -- # rm -f rm -f
00:28:32.857 16:37:52 -- ftl/common.sh@206 -- # rm -f rm -f
00:28:32.857 16:37:52 -- ftl/common.sh@207 -- # rm -f rm -f
00:28:32.857 16:37:52 -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:28:32.857 16:37:52 -- ftl/common.sh@209 -- # rm -f rm -f
00:28:32.857
00:28:32.857 real    13m29.856s
00:28:32.857 user    15m30.802s
00:28:32.857 sys     1m33.157s
00:28:32.857 16:37:52 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:28:32.857 ************************************
00:28:32.857 END TEST ftl
00:28:32.857 ************************************
00:28:32.857 16:37:52 -- common/autotest_common.sh@10 -- # set +x
00:28:32.857 16:37:52 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:28:32.857 16:37:52 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']'
00:28:32.857 16:37:52 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:28:32.857 16:37:52 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:28:32.857 16:37:52 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]]
00:28:32.857 16:37:52 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]]
00:28:32.857 16:37:52 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]]
00:28:32.857 16:37:52 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]]
00:28:32.857 16:37:52 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT
00:28:32.857 16:37:52 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup
00:28:32.857 16:37:52 -- common/autotest_common.sh@722 -- # xtrace_disable
00:28:32.857 16:37:52 -- common/autotest_common.sh@10 -- # set +x
00:28:32.857 16:37:52 -- spdk/autotest.sh@373 -- # autotest_cleanup
00:28:32.857 16:37:52 -- common/autotest_common.sh@1381 -- # local autotest_es=0
00:28:32.857 16:37:52 -- common/autotest_common.sh@1382 -- # xtrace_disable
00:28:32.857 16:37:52 -- common/autotest_common.sh@10 -- # set +x
00:28:34.242 INFO: APP EXITING
00:28:34.242 INFO: killing all VMs
00:28:34.242 INFO: killing vhost app
00:28:34.242 INFO: EXIT DONE
00:28:34.814 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:28:34.814 0000:00:09.0 (1b36 0010): Already using the nvme driver
00:28:34.814 0000:00:08.0 (1b36 0010): Already using the nvme driver
00:28:34.814 0000:00:06.0 (1b36 0010): Already using the nvme driver
00:28:34.814 0000:00:07.0 (1b36 0010): Already using the nvme driver
00:28:35.758 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
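
Both killprocess calls traced above (pid 70698, already gone, and pid 79988, killed and reaped) follow the same pattern from test/common/autotest_common.sh: probe the pid with kill -0 before signalling it. A condensed sketch of that pattern, not the verbatim SPDK helper, which as the trace shows also checks the process name via ps -o comm= and special-cases sudo:

#!/usr/bin/env bash
# Sketch of the liveness-check-then-kill pattern seen in the trace.
killprocess_sketch() {
    local pid=$1
    [ -z "$pid" ] && return 1
    # kill -0 delivers no signal; it only asks "does this pid exist and
    # may we signal it?". This is the step that printed
    # "kill: (70698) - No such process" above.
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    # wait reaps the exit status; it only works for children of this shell,
    # which the SPDK target is in the autotest run.
    wait "$pid" 2>/dev/null || true
}
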
00:28:35.758 Cleaning
00:28:35.758 Removing: /var/run/dpdk/spdk0/config
00:28:35.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:28:35.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:28:35.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:28:35.758 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:28:35.758 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:28:35.758 Removing: /var/run/dpdk/spdk0/hugepage_info
00:28:35.758 Removing: /var/run/dpdk/spdk0
00:28:35.758 Removing: /var/run/dpdk/spdk_pid55958
00:28:35.758 Removing: /var/run/dpdk/spdk_pid56159
00:28:35.758 Removing: /var/run/dpdk/spdk_pid56453
00:28:35.758 Removing: /var/run/dpdk/spdk_pid56544
00:28:35.758 Removing: /var/run/dpdk/spdk_pid56628
00:28:35.758 Removing: /var/run/dpdk/spdk_pid56738
00:28:35.758 Removing: /var/run/dpdk/spdk_pid56836
00:28:35.758 Removing: /var/run/dpdk/spdk_pid56880
00:28:35.758 Removing: /var/run/dpdk/spdk_pid56912
00:28:35.758 Removing: /var/run/dpdk/spdk_pid56987
00:28:35.758 Removing: /var/run/dpdk/spdk_pid57093
00:28:35.758 Removing: /var/run/dpdk/spdk_pid57512
00:28:35.758 Removing: /var/run/dpdk/spdk_pid57578
00:28:35.758 Removing: /var/run/dpdk/spdk_pid57643
00:28:35.758 Removing: /var/run/dpdk/spdk_pid57659
00:28:35.758 Removing: /var/run/dpdk/spdk_pid57752
00:28:35.758 Removing: /var/run/dpdk/spdk_pid57768
00:28:35.758 Removing: /var/run/dpdk/spdk_pid57866
00:28:35.758 Removing: /var/run/dpdk/spdk_pid57890
00:28:35.758 Removing: /var/run/dpdk/spdk_pid57943
00:28:35.758 Removing: /var/run/dpdk/spdk_pid57961
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58014
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58032
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58182
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58224
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58312
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58384
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58415
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58482
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58508
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58549
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58575
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58616
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58641
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58678
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58698
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58739
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58765
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58806
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58832
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58886
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58912
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58953
00:28:35.758 Removing: /var/run/dpdk/spdk_pid58979
00:28:35.758 Removing: /var/run/dpdk/spdk_pid59021
00:28:35.758 Removing: /var/run/dpdk/spdk_pid59049
00:28:35.758 Removing: /var/run/dpdk/spdk_pid59085
00:28:35.758 Removing: /var/run/dpdk/spdk_pid59111
00:28:35.758 Removing: /var/run/dpdk/spdk_pid59152
00:28:35.758 Removing: /var/run/dpdk/spdk_pid59182
00:28:35.758 Removing: /var/run/dpdk/spdk_pid59230
00:28:35.758 Removing: /var/run/dpdk/spdk_pid59256
00:28:35.758 Removing: /var/run/dpdk/spdk_pid59297
00:28:35.758 Removing: /var/run/dpdk/spdk_pid59323
00:28:35.758 Removing: /var/run/dpdk/spdk_pid59364
00:28:35.758 Removing: /var/run/dpdk/spdk_pid59388
00:28:35.758 Removing: /var/run/dpdk/spdk_pid59423
00:28:35.758 Removing: /var/run/dpdk/spdk_pid59449
00:28:35.758 Removing: /var/run/dpdk/spdk_pid59490
00:28:35.759 Removing: /var/run/dpdk/spdk_pid59516
00:28:35.759 Removing: /var/run/dpdk/spdk_pid59563
00:28:35.759 Removing: /var/run/dpdk/spdk_pid59592
00:28:35.759 Removing: /var/run/dpdk/spdk_pid59636
00:28:35.759 Removing: /var/run/dpdk/spdk_pid59670
00:28:35.759 Removing: /var/run/dpdk/spdk_pid59714
00:28:35.759 Removing: /var/run/dpdk/spdk_pid59740
00:28:35.759 Removing: /var/run/dpdk/spdk_pid59787
00:28:35.759 Removing: /var/run/dpdk/spdk_pid59813
00:28:35.759 Removing: /var/run/dpdk/spdk_pid59855
00:28:35.759 Removing: /var/run/dpdk/spdk_pid59944
00:28:35.759 Removing: /var/run/dpdk/spdk_pid60056
00:28:35.759 Removing: /var/run/dpdk/spdk_pid60215
00:28:35.759 Removing: /var/run/dpdk/spdk_pid60312
00:28:35.759 Removing: /var/run/dpdk/spdk_pid60354
00:28:35.759 Removing: /var/run/dpdk/spdk_pid60808
00:28:35.759 Removing: /var/run/dpdk/spdk_pid61123
00:28:35.759 Removing: /var/run/dpdk/spdk_pid61233
00:28:35.759 Removing: /var/run/dpdk/spdk_pid61286
00:28:35.759 Removing: /var/run/dpdk/spdk_pid61317
00:28:35.759 Removing: /var/run/dpdk/spdk_pid61400
00:28:35.759 Removing: /var/run/dpdk/spdk_pid62052
00:28:35.759 Removing: /var/run/dpdk/spdk_pid62094
00:28:35.759 Removing: /var/run/dpdk/spdk_pid62552
00:28:35.759 Removing: /var/run/dpdk/spdk_pid62674
00:28:35.759 Removing: /var/run/dpdk/spdk_pid62783
00:28:35.759 Removing: /var/run/dpdk/spdk_pid62837
00:28:35.759 Removing: /var/run/dpdk/spdk_pid62862
00:28:35.759 Removing: /var/run/dpdk/spdk_pid62888
00:28:35.759 Removing: /var/run/dpdk/spdk_pid64831
00:28:35.759 Removing: /var/run/dpdk/spdk_pid64965
00:28:36.020 Removing: /var/run/dpdk/spdk_pid64974
00:28:36.020 Removing: /var/run/dpdk/spdk_pid64986
00:28:36.020 Removing: /var/run/dpdk/spdk_pid65056
00:28:36.020 Removing: /var/run/dpdk/spdk_pid65060
00:28:36.020 Removing: /var/run/dpdk/spdk_pid65077
00:28:36.020 Removing: /var/run/dpdk/spdk_pid65133
00:28:36.020 Removing: /var/run/dpdk/spdk_pid65137
00:28:36.020 Removing: /var/run/dpdk/spdk_pid65149
00:28:36.020 Removing: /var/run/dpdk/spdk_pid65210
00:28:36.020 Removing: /var/run/dpdk/spdk_pid65214
00:28:36.020 Removing: /var/run/dpdk/spdk_pid65231
00:28:36.020 Removing: /var/run/dpdk/spdk_pid66690
00:28:36.020 Removing: /var/run/dpdk/spdk_pid66803
00:28:36.021 Removing: /var/run/dpdk/spdk_pid66934
00:28:36.021 Removing: /var/run/dpdk/spdk_pid67033
00:28:36.021 Removing: /var/run/dpdk/spdk_pid67120
00:28:36.021 Removing: /var/run/dpdk/spdk_pid67196
00:28:36.021 Removing: /var/run/dpdk/spdk_pid67301
00:28:36.021 Removing: /var/run/dpdk/spdk_pid67375
00:28:36.021 Removing: /var/run/dpdk/spdk_pid67516
00:28:36.021 Removing: /var/run/dpdk/spdk_pid67896
00:28:36.021 Removing: /var/run/dpdk/spdk_pid67933
00:28:36.021 Removing: /var/run/dpdk/spdk_pid68385
00:28:36.021 Removing: /var/run/dpdk/spdk_pid68575
00:28:36.021 Removing: /var/run/dpdk/spdk_pid68677
00:28:36.021 Removing: /var/run/dpdk/spdk_pid68792
00:28:36.021 Removing: /var/run/dpdk/spdk_pid68846
00:28:36.021 Removing: /var/run/dpdk/spdk_pid68877
00:28:36.021 Removing: /var/run/dpdk/spdk_pid69192
00:28:36.021 Removing: /var/run/dpdk/spdk_pid69254
00:28:36.021 Removing: /var/run/dpdk/spdk_pid69329
00:28:36.021 Removing: /var/run/dpdk/spdk_pid69728
00:28:36.021 Removing: /var/run/dpdk/spdk_pid69883
00:28:36.021 Removing: /var/run/dpdk/spdk_pid70698
00:28:36.021 Removing: /var/run/dpdk/spdk_pid70829
00:28:36.021 Removing: /var/run/dpdk/spdk_pid71035
00:28:36.021 Removing: /var/run/dpdk/spdk_pid71149
00:28:36.021 Removing: /var/run/dpdk/spdk_pid71440
00:28:36.021 Removing: /var/run/dpdk/spdk_pid71683
00:28:36.021 Removing: /var/run/dpdk/spdk_pid72070
00:28:36.021 Removing: /var/run/dpdk/spdk_pid72284
00:28:36.021 Removing: /var/run/dpdk/spdk_pid72489
00:28:36.021 Removing: /var/run/dpdk/spdk_pid72536
00:28:36.021 Removing: /var/run/dpdk/spdk_pid72736
00:28:36.021 Removing: /var/run/dpdk/spdk_pid72772
00:28:36.021 Removing: /var/run/dpdk/spdk_pid72828
00:28:36.021 Removing: /var/run/dpdk/spdk_pid73125
00:28:36.021 Removing: /var/run/dpdk/spdk_pid73380
00:28:36.021 Removing: /var/run/dpdk/spdk_pid73996
00:28:36.021 Removing: /var/run/dpdk/spdk_pid74763
00:28:36.021 Removing: /var/run/dpdk/spdk_pid75357
00:28:36.021 Removing: /var/run/dpdk/spdk_pid76179
00:28:36.021 Removing: /var/run/dpdk/spdk_pid76334
00:28:36.021 Removing: /var/run/dpdk/spdk_pid76422
00:28:36.021 Removing: /var/run/dpdk/spdk_pid76988
00:28:36.021 Removing: /var/run/dpdk/spdk_pid77046
00:28:36.021 Removing: /var/run/dpdk/spdk_pid77600
00:28:36.021 Removing: /var/run/dpdk/spdk_pid78102
00:28:36.021 Removing: /var/run/dpdk/spdk_pid78945
00:28:36.021 Removing: /var/run/dpdk/spdk_pid79070
00:28:36.021 Removing: /var/run/dpdk/spdk_pid79125
00:28:36.021 Removing: /var/run/dpdk/spdk_pid79181
00:28:36.021 Removing: /var/run/dpdk/spdk_pid79242
00:28:36.021 Removing: /var/run/dpdk/spdk_pid79312
00:28:36.021 Removing: /var/run/dpdk/spdk_pid79557
00:28:36.021 Removing: /var/run/dpdk/spdk_pid79596
00:28:36.021 Removing: /var/run/dpdk/spdk_pid79657
00:28:36.021 Removing: /var/run/dpdk/spdk_pid79745
00:28:36.021 Removing: /var/run/dpdk/spdk_pid79790
00:28:36.021 Removing: /var/run/dpdk/spdk_pid79867
00:28:36.021 Removing: /var/run/dpdk/spdk_pid79988
00:28:36.021 Clean
00:28:36.282 killing process with pid 48167
00:28:36.282 killing process with pid 48168
00:28:36.282 16:37:55 -- common/autotest_common.sh@1446 -- # return 0
00:28:36.282 16:37:55 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup
00:28:36.282 16:37:55 -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:36.282 16:37:55 -- common/autotest_common.sh@10 -- # set +x
00:28:36.282 16:37:55 -- spdk/autotest.sh@376 -- # timing_exit autotest
00:28:36.282 16:37:55 -- common/autotest_common.sh@728 -- # xtrace_disable
00:28:36.282 16:37:55 -- common/autotest_common.sh@10 -- # set +x
00:28:36.282 16:37:55 -- spdk/autotest.sh@377 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:28:36.282 16:37:55 -- spdk/autotest.sh@379 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:28:36.282 16:37:55 -- spdk/autotest.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:28:36.282 16:37:55 -- spdk/autotest.sh@381 -- # [[ y == y ]]
00:28:36.282 16:37:55 -- spdk/autotest.sh@383 -- # hostname
00:28:36.282 16:37:55 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:28:36.543 geninfo: WARNING: invalid characters removed from testname!
00:29:03.143 16:38:19 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:03.143 16:38:22 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:05.690 16:38:25 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:07.603 16:38:27 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:09.518 16:38:29 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:10.890 16:38:30 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:29:12.794 16:38:32 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
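
The autotest.sh steps above assemble the final coverage report: a capture over the whole repo tagged with the hostname (the geninfo warning appears because the hostname used as the test name contains characters lcov strips), a merge of that capture with the pre-test baseline, then -r filters that drop DPDK, system, and example code. A condensed sketch of the same flow, with paths taken from the log; the real invocations also pass the --rc lcov_*/genhtml_* switches shown above, omitted here for brevity:

#!/usr/bin/env bash
REPO=/home/vagrant/spdk_repo/spdk
OUT=$REPO/../output

# Capture post-test counters for the repo, tagged with the hostname.
lcov -q -c --no-external -d "$REPO" -t "$(hostname)" -o "$OUT/cov_test.info"
# Merge the pre-test baseline with the post-test capture...
lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
# ...then strip everything that is not SPDK's own code.
lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
lcov -q -r "$OUT/cov_total.info" '/usr/*' -o "$OUT/cov_total.info"
lcov -q -r "$OUT/cov_total.info" '*/examples/vmd/*' -o "$OUT/cov_total.info"
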
00:29:12.794 16:38:32 -- common/autotest_common.sh@1689 -- $ [[ y == y ]]
00:29:12.794 16:38:32 -- common/autotest_common.sh@1690 -- $ lcov --version
00:29:12.794 16:38:32 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}'
00:29:12.794 16:38:32 -- common/autotest_common.sh@1690 -- $ lt 1.15 2
00:29:12.794 16:38:32 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2
00:29:12.794 16:38:32 -- scripts/common.sh@332 -- $ local ver1 ver1_l
00:29:12.794 16:38:32 -- scripts/common.sh@333 -- $ local ver2 ver2_l
00:29:12.794 16:38:32 -- scripts/common.sh@335 -- $ IFS=.-:
00:29:12.794 16:38:32 -- scripts/common.sh@335 -- $ read -ra ver1
00:29:12.794 16:38:32 -- scripts/common.sh@336 -- $ IFS=.-:
00:29:12.794 16:38:32 -- scripts/common.sh@336 -- $ read -ra ver2
00:29:12.794 16:38:32 -- scripts/common.sh@337 -- $ local 'op=<'
00:29:12.794 16:38:32 -- scripts/common.sh@339 -- $ ver1_l=2
00:29:12.794 16:38:32 -- scripts/common.sh@340 -- $ ver2_l=1
00:29:12.794 16:38:32 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
00:29:12.794 16:38:32 -- scripts/common.sh@343 -- $ case "$op" in
00:29:12.794 16:38:32 -- scripts/common.sh@344 -- $ : 1
00:29:12.794 16:38:32 -- scripts/common.sh@363 -- $ (( v = 0 ))
00:29:12.794 16:38:32 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:12.794 16:38:32 -- scripts/common.sh@364 -- $ decimal 1
00:29:12.794 16:38:32 -- scripts/common.sh@352 -- $ local d=1
00:29:12.794 16:38:32 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]]
00:29:12.794 16:38:32 -- scripts/common.sh@354 -- $ echo 1
00:29:12.794 16:38:32 -- scripts/common.sh@364 -- $ ver1[v]=1
00:29:12.794 16:38:32 -- scripts/common.sh@365 -- $ decimal 2
00:29:12.794 16:38:32 -- scripts/common.sh@352 -- $ local d=2
00:29:12.794 16:38:32 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]]
00:29:12.794 16:38:32 -- scripts/common.sh@354 -- $ echo 2
00:29:12.794 16:38:32 -- scripts/common.sh@365 -- $ ver2[v]=2
00:29:12.794 16:38:32 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
00:29:12.794 16:38:32 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] ))
00:29:12.794 16:38:32 -- scripts/common.sh@367 -- $ return 0
00:29:12.794 16:38:32 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
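
The scripts/common.sh trace above walks the entire comparison for lt 1.15 2: both versions are split on the characters '.', '-' and ':' into arrays, each field is validated as a decimal, and the arrays are compared field by field as integers, so 1.15 < 2 because 1 < 2 already decides it at the first field. A standalone re-derivation of that logic, simplified to the less-than case (the real cmp_versions also handles >, >=, <= and the decimal validation traced above):

#!/usr/bin/env bash
# Split both versions on ".-:" and compare field by field as integers.
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Missing fields compare as 0, so "2" behaves like "2.0".
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1  # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 is older than 2"   # prints the message
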
00:29:12.794 16:38:32 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS=
00:29:12.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:12.794 --rc genhtml_branch_coverage=1
00:29:12.794 --rc genhtml_function_coverage=1
00:29:12.794 --rc genhtml_legend=1
00:29:12.794 --rc geninfo_all_blocks=1
00:29:12.794 --rc geninfo_unexecuted_blocks=1
00:29:12.794
00:29:12.794 '
00:29:12.794 16:38:32 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS='
00:29:12.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:12.794 --rc genhtml_branch_coverage=1
00:29:12.794 --rc genhtml_function_coverage=1
00:29:12.794 --rc genhtml_legend=1
00:29:12.795 --rc geninfo_all_blocks=1
00:29:12.795 --rc geninfo_unexecuted_blocks=1
00:29:12.795
00:29:12.795 '
00:29:12.795 16:38:32 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov
00:29:12.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:12.795 --rc genhtml_branch_coverage=1
00:29:12.795 --rc genhtml_function_coverage=1
00:29:12.795 --rc genhtml_legend=1
00:29:12.795 --rc geninfo_all_blocks=1
00:29:12.795 --rc geninfo_unexecuted_blocks=1
00:29:12.795
00:29:12.795 '
00:29:12.795 16:38:32 -- common/autotest_common.sh@1704 -- $ LCOV='lcov
00:29:12.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:12.795 --rc genhtml_branch_coverage=1
00:29:12.795 --rc genhtml_function_coverage=1
00:29:12.795 --rc genhtml_legend=1
00:29:12.795 --rc geninfo_all_blocks=1
00:29:12.795 --rc geninfo_unexecuted_blocks=1
00:29:12.795
00:29:12.795 '
00:29:12.795 16:38:32 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:29:12.795 16:38:32 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:29:12.795 16:38:32 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:12.795 16:38:32 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:12.795 16:38:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:12.795 16:38:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:12.795 16:38:32 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:12.795 16:38:32 -- paths/export.sh@5 -- $ export PATH
00:29:12.795 16:38:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:12.795 16:38:32 -- common/autobuild_common.sh@439 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:29:12.795 16:38:32 -- common/autobuild_common.sh@440 -- $ date +%s
00:29:12.795 16:38:32 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1731170312.XXXXXX
00:29:12.795 16:38:32 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1731170312.2LWIe6
00:29:12.795 16:38:32 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]]
00:29:12.795 16:38:32 -- common/autobuild_common.sh@446 -- $ '[' -n '' ']'
00:29:12.795 16:38:32 -- common/autobuild_common.sh@449 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:29:12.795 16:38:32 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:29:12.795 16:38:32 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:29:12.795 16:38:32 -- common/autobuild_common.sh@456 -- $ get_config_params
00:29:12.795 16:38:32 -- common/autotest_common.sh@397 -- $ xtrace_disable
00:29:12.795 16:38:32 -- common/autotest_common.sh@10 -- $ set +x
00:29:12.795 16:38:32 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:29:12.795 16:38:32 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:29:12.795 16:38:32 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:29:12.795 16:38:32 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:29:12.795 16:38:32 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:29:12.795 16:38:32 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:29:12.795 16:38:32 -- spdk/autopackage.sh@19 -- $ timing_finish
00:29:12.795 16:38:32 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:29:12.795 16:38:32 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:29:12.795 16:38:32 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:29:12.795 16:38:32 -- spdk/autopackage.sh@20 -- $ exit 0
00:29:12.806 + [[ -n 4988 ]]
00:29:12.806 + sudo kill 4988
00:29:12.817 [Pipeline] }
00:29:12.835 [Pipeline] // timeout
00:29:12.842 [Pipeline] }
00:29:12.857 [Pipeline] // stage
00:29:12.863 [Pipeline] }
00:29:12.878 [Pipeline] // catchError
00:29:12.888 [Pipeline] stage
00:29:12.891 [Pipeline] { (Stop VM)
00:29:12.905 [Pipeline] sh
00:29:13.190 + vagrant halt
00:29:15.712 ==> default: Halting domain...
00:29:21.020 [Pipeline] sh
00:29:21.302 + vagrant destroy -f
00:29:23.846 ==> default: Removing domain...
00:29:24.120 [Pipeline] sh
00:29:24.403 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:29:24.414 [Pipeline] }
00:29:24.428 [Pipeline] // stage
00:29:24.434 [Pipeline] }
00:29:24.448 [Pipeline] // dir
00:29:24.453 [Pipeline] }
00:29:24.468 [Pipeline] // wrap
00:29:24.475 [Pipeline] }
00:29:24.487 [Pipeline] // catchError
00:29:24.497 [Pipeline] stage
00:29:24.499 [Pipeline] { (Epilogue)
00:29:24.512 [Pipeline] sh
00:29:24.797 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:29:30.087 [Pipeline] catchError
00:29:30.089 [Pipeline] {
00:29:30.098 [Pipeline] sh
00:29:30.378 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:29:30.378 Artifacts sizes are good
00:29:30.518 [Pipeline] }
00:29:30.532 [Pipeline] // catchError
00:29:30.543 [Pipeline] archiveArtifacts
00:29:30.551 Archiving artifacts
00:29:30.648 [Pipeline] cleanWs
00:29:30.660 [WS-CLEANUP] Deleting project workspace...
00:29:30.660 [WS-CLEANUP] Deferred wipeout is used...
00:29:30.667 [WS-CLEANUP] done
00:29:30.669 [Pipeline] }
00:29:30.683 [Pipeline] // stage
00:29:30.688 [Pipeline] }
00:29:30.702 [Pipeline] // node
00:29:30.707 [Pipeline] End of Pipeline
00:29:30.756 Finished: SUCCESS